>> MICHAEL BECK: Welcome to technica11y, the webinar series dedicated to the technical challenges of making the web accessible. This month, our presenter is Michele Williams, Senior UX Researcher for Accessibility at Pearson's Higher Education Division.
>> MICHAEL BECK: Hi, everybody! Welcome to this May edition of Technica11y. I'm Michael Beck, the Operations Manager at Tenon. It's May Day! The fires of Beltane have been lit and summer is fast approaching, for those in the Northern Hemisphere, at least! Today, we have our very own May Queen in the form of Michele Williams, the senior user experience researcher at Pearson. Good morning, Michele!
>> MICHELE WILLIAMS: Good morning, how are you?
>> MICHAEL BECK: I'm doing well. Michele is going to be discussing something we have yet to delve into during our sojourns on the seas of accessibility, and that's effective usability research. Lots of times, user experience research is performed at the front and back end of the cycle and, much like keeping accessibility in mind while developing content, keeping the user experience in mind is really something that should be a part of every stage as we go through the cycle. So, as always, please add any questions to the chat box in Zoom and we'll get to them at the end. And, without further ado, take it away, Michele.
>> MICHELE WILLIAMS: Awesome, thank you, Michael! Good morning, afternoon, evening to everyone who's joined and, "Hi!" to YouTubers later on. Thank you for having me and I'm really excited about this topic. The title of the presentation again is, "Conducting Accessibility User Research: What's Really Needed?" I'm Michele Williams, I'm a senior UX Researcher focused on accessibility within Pearson's Higher Ed, but I have done a lot of user experience type jobs and schooling all along the way. So, I'll be pulling from a lot of those experiences today as we walk through this. And just to go ahead and dive right in, I know this is a topic that's been proliferating more, which is great. We do want to make sure that we are inclusive in our research on our products, our technology products, as much as possible. But we want to make sure that we're doing it right. So, let's see if I can get my slides to advance. Perfect. Really simple: "What's really needed?" I'll break this down into three main sections: "Knowing the Basics," "Accessible Artifacts," and "Accessible Testing." And we're just going to get right to it.
So, "Knowing the Basics." Basically, first and foremost, you have to understand disability. Now, the caveat for this presentation is that I'm mostly focused on UX professionals, or I'm assuming that you work with UX professionals; this is not a talk that you can take and run with without that baseline understanding of human-computer interaction and user experience. But, assuming you have those fundamentals, it's still important to understand the diversity and manifestations of disability and what it will look like with the participants that you're interacting with. There's no getting around that. One thing that came to mind as I was putting together the presentation was an equivalent scenario: say there's a community activist who wants to get feedback on a building.
They are revitalizing a building, they want diverse voices because this will be a communal space. They go to meet someone outside of that building who is in a wheelchair. They are like, "Okay, come on in! We want to make this an interactive talk," and the person in the wheelchair says, "Okay, where is my entrance?" And let's say that community activist kind of, their face drops because they realize the only way into that building is through the stairs. So, that could be the equivalent of what happens with our user research. We don't just want to dive in because we may be diving in in a way that's not going to allow the participants to be involved because something is not necessarily accessible to them.
So, understanding disability, and then also understanding the laws and standards, are going to be important, as well. You don't have to become an expert, but again, you have to understand what it means for something to be or not be accessible. And I've broken this down because I understand that people might be using or building technologies all across the spectrum. For web and applications, that's going to be, at least for the U.S., something called Section 508, and also the international guidelines, the Web Content Accessibility Guidelines (WCAG). For hardware, it may again be Section 508 or the Americans with Disabilities Act. For voice and telecommunication-based systems, again, Section 508 will have standards in there for you, but also the 21st Century Communications and Video Accessibility Act, the CVAA. So, understanding the laws that are applicable to your country, applicable to what you're building, applicable to the population you're interested in interacting with, just having some understanding, or even going out and getting training on those acts, those laws, those guidelines, will be beneficial to you.
Also, getting immersed. I'm going to step through this one because there's one great resource that people may not necessarily be aware of. Again, this is somewhat specific to the States, but it's likely applicable in other countries, as well: Assistive Technology Centers. These are government-sponsored statewide centers focused on matching clients to assistive technology. Usually a state will have several in key locales throughout the state. And it is a service where people who have been diagnosed with a disability, or who are progressing, maybe they have a deteriorating condition, can come to these Assistive Technology Centers, where they literally have all kinds of the latest and greatest hardware, software, and everything in between in the facility for the user to try out. They speak with an assistive technology professional, who matches them with assistive technology and actually loans it to them to try out before they go and buy it, which usually is an expense.
So, those centers are great places with very friendly people, usually with disabilities themselves, who want to be advocates and will allow you to spend a half a day, a couple of hours, or even a whole day, or come out to your facility and give a talk to explain to you what assistive technology is, how it's used, and who it helps, and it's really educational, as well. And just fun! I like going and just learning the latest and greatest AT. So, that's one resource for getting immersed. As well as disability organizations: we know that there are disability-specific national and local chapters of organizations, but also check your local library; there may be accessibility meetups there. Literally, meetup.com sometimes has accessibility meetups, and then there are, again, disability-specific email lists and groups that may be publicly available that you could join or tap into to start to get immersed and understand the communities. And then lastly, industry professionals.
So, getting your training and getting your resources happens through some sort of mechanism. And those industry professionals, whether they have a disability or not, are still going to be a valuable resource to help get you onboarded into, again, the populations or the general spectrum of disability that you may need. So, for example, orientation and mobility specialists: that's the group that helps train people on how to navigate the world if they don't have vision.
Assistive Technology Specialists, like we talked about at the Accessibility Centers. Rehabilitation Specialists. Whatever those key stakeholders are, they may be willing to sit down with you and have some communication to explain, again, the depth and breadth of disability.
And then, lastly, understand the current state of where your project is. So, for related work, if we go back to research that has been done prior, make sure you seek out any accessibility-focused publications in that area.
So, for the ACM, the Association for Computing Machinery, they have a Special Interest Group on accessibility, SIGACCESS. So you would want to not just look at CHI papers or CSCW papers, but SIGACCESS papers, as well. For higher education, there's AHEAD. They have a conference called Accessing Higher Ground. So, for accessibility-related publications, journals, and conferences, you'll want to seek those out and learn from related work. Also, understand the competition: if you're making a product that has a similar or competitor product, then you'll want to go ahead and evaluate that product for its accessibility and understand it, maybe even use it in some of your early evaluations with users. And then, finally, what's your baseline? Particularly if you're revamping something that's already in production, go ahead and get an accessibility consulting firm to come in and give you a formal audit of your product.
So, that will help you to understand what's maybe an issue and why, and how you go about fixing it, so that you have that baseline understanding. If we go back to my example of the person who was brought to the building: if that community advocate understood the ADA, or had someone explain the ADA to them, they would have already known that that space was not going to be conducive to having a person in a wheelchair meet them there. They could have then altered how they wanted to meet up with that person.
So hopefully that's kind of making sense there.
The long and short of that is this: user testing is not an audit. You are responsible for testing accessibility. Your participants are testing usability. And that seems kind of obvious, but it actually happens a lot that people think, "I'm bringing in participants to test the accessibility of our system." That's not true. I'm bringing in participants because our systems are accessible and now we're trying to make sure that those systems are user friendly, understandable, and all the other attributes that we bring users without disabilities in to test for. So that's why it's going to be key, again, to understand accessibility, understand your baseline with an audit, and then proceed from there.
So, "Accessible Artifacts." I'll call them artifacts because I know people are working in different technology spaces, but ultimately, you're user testing something, some thing, some object. And to borrow a little bit from a user experience diagram by Jesse James Garrett, The Elements of User Experience: he has an arrow in his diagram from abstract to concrete, representing the project phases. So, we know that project phases, even outside of UX, even when putting together podcast presentations, move from abstract to concrete, from high-level ideas all the way to finished product. And in between, the artifacts follow that same path: you have something that's just a high-level idea, maybe you move to sketches, then prototypes, and finally your final product, as you're moving along that spectrum of the project phases.
So, the key question is, "What's needed to conduct successful inclusive research with these artifacts?" As Michael introduced in the beginning, a lot of times research tends to happen at the beginning and the end, so when you have high-level ideas and when you have a finished product. But we can also do research all along the way with users with disabilities, the way that we do with users without disabilities.
So, some considerations for your testing artifact, just to keep top of mind: to use your artifact, does it require clear, usable vision? Sharp hearing? The ability to talk? The ability to walk? The ability to stay focused? Grasping objects?
If your artifact requires these things, you're going to need to start to pivot because hopefully you're seeing that what you want is to make sure that your artifact can be used by a diverse population which means it can't be a requirement to have certain physical abilities.
Now, in the ideas phase, again, that's just high level. You have a scope or project in mind. You want to validate the idea. That's going to be more concerned with the method of research rather than the artifact itself; there is no artifact at that time. So we're going to revisit that one in the next section.
But, once you start to get to the sketching part, it's important to keep in mind to go beyond, and I put in quotes, "paper." Meaning something that's either actually paper based or the equivalent of something that's just very visual based, because again, you're going to leave out certain populations when you do that.
So, in the columns on this slide, the first being web and applications: instead of going paper-based, try to leverage accessible technologies like Microsoft Office to increase compatibility with assistive technology. If you go a little more digital, and a little more accessible, then someone who uses assistive technology, for instance, is going to be more able to participate.
"Hardware." If you're building something, use inexpensive three-dimensional objects that users can both see and more easily manipulate, like Play-Doh, styrofoam blocks, or 3D prints. Again, you may still be sketching as a designer or someone who is creating this, or your design teams may be doing that, but in terms of what you're presenting to users, it's okay to go ahead and make those molds a little more feasible to use so everybody has an opportunity to participate. If somebody doesn't see paper, or paper would be difficult for them to manipulate, you can still give them the opportunity to independently give you feedback in another way.
And for voice-based systems, the only thing that really came to mind is that there's so much text-to-speech technology out there; just keep those technologies in mind and be able to leverage them while you're, again, going about your sketching. As well as, again, leveraging any comparable products, or anything existing and accessible that you could start to bring in to get feedback at that sketch phase, as well as maybe cognitive walkthroughs and talking through the idea.
Then, as we move to prototypes, we're getting a little closer to the real thing, so we're making more realistic artifacts. For web and applications, you're really going to want to start to create throwaway code, meaning code you're okay with throwing away if it doesn't work, but code nonetheless. We need to be careful of tools that call themselves prototyping tools; they are not going to yield accessible products. They may yield something that most users can use, but once someone needs to use just a keyboard to navigate, or their voice to navigate, or a screen reader or some other assistive technology to really bring in some of the information, like on the web, then those things are going to start to break down pretty quickly. Essentially, some of the prototyping tools create the equivalent of a paper prototype. You want to be careful saying, "We'll use this." Really investigate what you'll use. The way that we solved this at Pearson is to create a prototyping team. I know it's not always feasible, but I recommend it if you can. Also, for hardware, you're going to want to go to the closest equivalent: GoPro cameras, tablets for computer monitors. If you're making something like a kiosk, get hardware that's as close to equivalent as possible and okay for your population to use. And, lastly, for voice, again, leverage, or maybe create, those realistic voices, and also the existing accessible telecommunication devices, smartphones or, again, mobile devices, that will help you to get your feedback on a prototype.
Now, I'm going to take a moment here and really delve into one research method known as the "Wizard of Oz." We're not going to talk about the movie, more so the research method. The Wizard of Oz, as I call it, is a "fake it until you make it" testing method. It's a method in which the users think they are using a real system, but the researchers are creating that illusion. This is a really powerful method in general, but also for creating accessible technology, where we know that there's a lot more consideration that sometimes needs to be put in, particularly if teams aren't 100% familiar with what to look out for. Before you start digging really deep into coding or manufacturing, Wizard of Oz can really save you a lot. Now, it takes a lot of planning and a lot of detail to run, but it can be worth it, particularly if you're going to run it multiple times; the setup and scripting time may be worth it.
I have two examples of times that I've run this that actually aren't related to accessibility research, but they explain more about Wizard of Oz and how it works. The first is from a previous job, where I was a voice user interface designer for what's called an IVR, an interactive voice response system. Basically, the telephone systems where you talk to a computer instead of a person. I know, I'm sorry. But I worked on those early in my career, and we used Wizard of Oz a lot to test our scripts. Now, this was back in the day when people, first of all, worked in an office and, second of all, had a landline desk phone. Those things may be becoming more and more obsolete, but leveraging that desk phone, all we did was have the user dial a programmed number. We'll save you from dialing 1-800-whatever; just hit this button. That rang to someone's desk phone, and the designer acted out the script. So, we would put on our best voice talent voice and pretend we were the system that the user was navigating through. By doing that, we were able to get interface validation before all of the big production of getting the actual voice talent to record everything and doing all of the development. In the background, the designer had a spreadsheet, either in Word or Excel, whatever their preference was, of what they should be saying based on the user's responses. And then, of course, if something went wrong, or if the user used the touch pad and we weren't able to detect what they pressed on the phone versus spoke, then we had a failsafe of, "There's an error," and then you hang up. So, a useful technique, particularly for voice-based interaction. And similarly, I believe this is known, that Pearson is coming out with an Alexa skill, and one of the things we were planning to do, and we didn't quite run it but I knew it was going to work, was test out our Alexa skill with users.
At the time, what we wanted to test was not only the path, which I helped write because I was a VUI designer, but also what users would expect to say. When you're designing something, you have to design what's called the grammar: the responses that the user will say. And the Alexa apps and some of these home assistant apps want to be smart, where you don't have to think so much about the command; it just knows what you would naturally want to say. We were still struggling to know what users would want to say. Also, with Alexa, there are some global commands, and we didn't want a user to accidentally say one of those global commands and come out of the app that we had built and drop into Music or some other app on the actual device. So, my suggestion: there are a lot of speakers that look like Amazon devices. Let's just have them talk to the speakers. Now, it wouldn't have the little blue ring, but you can kind of play it up, like maybe the light is broken or something like that, if they really understood what the Alexa devices look like. In the meantime, we would have the developer in the room, because our researcher and developer could be in the lab, and the developer looks like someone who is taking notes, but really they could play the files. There was a simulator for Alexa, so you could record all the phrases that Alexa was supposed to be able to say, have them, again, in a spreadsheet, and have it connected via Bluetooth, and then you would just play the appropriate file based on what the user said. With that, we could determine what users wanted to be able to say, get our grammar, and keep the study under control by preventing deviations.
So, those are just two real-world examples. I also did a really complex one related to accessibility testing, but it was so complex I don't remember all of the setup. It had to do with simulating a navigation device, and we had a combination of microphones and GoPro cameras, and we used an AAC (augmentative and alternative communication) device for text-to-speech. It was really complex. So, you can get really detailed in how you go about doing Wizard of Oz; I just wanted to make a note of that.
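To make the mechanics of the Wizard of Oz method a bit more concrete, here is a minimal sketch of the kind of "spreadsheet" lookup the wizard works from: a table mapping what the participant just said to the line the wizard should act out next, with a failsafe for anything unrecognizable (like an undetectable keypad press). All prompts and phrases here are invented for illustration; a real study script would be far more detailed.

```javascript
// Hypothetical Wizard of Oz script table for an IVR-style test.
const GREETING = "Thanks for calling. You can say 'balance' or 'payments'.";
const FAILSAFE = "I'm sorry, there has been a system error. Goodbye.";

const script = {
  balance: "Your balance is fifty dollars. Anything else?",
  payments: "Your next payment is due on the fifteenth. Anything else?",
  goodbye: "Thanks for calling. Goodbye!",
};

// Return the line the wizard should act out next. Anything the wizard
// can't match triggers the failsafe, which ends the call while keeping
// the illusion of a real system intact.
function nextPrompt(utterance) {
  const key = utterance.trim().toLowerCase();
  return script[key] ?? FAILSAFE;
}
```

The value of writing the table down, even for a human "system," is exactly what the transcript describes: the wizard never improvises, so every participant hears consistent wording, and unexpected input has a planned exit.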
But, then lastly, once you get to the final, real, full product, it has to be compliant with whatever the appropriate standards and laws would be for your technology. The long and short of it: that product needs to be baseline accessible, meaning it meets the baseline standards, going back to what we talked about in the first section. So, if your system is not accessible, you need to work with other methods and work with your auditors and contractors until you can get to that point, because we want to be cautious about bringing in users to test what is essentially a broken system. If it's not accessible, it's broken. You can have consultants with disabilities work with you on how to fix a system you're working with or make an update, but that's different than user research. Even though that person may have a disability and may be a user of, say, that assistive technology, their expertise is going to be so far beyond that of a typical user that they're not representative of a user you would normally bring in. You still need to bring in users who don't have that kind of knowledge to test your system for the requisite usability standards.
The long and short of that: what the user is testing needs to be accessible. That again seems very basic, but it actually happens a lot with me, as well, where people are asking me, "We really need to do user research," and I'm like, "Yes, we really do! Give me something I can research with," and then we kind of have to have a discussion after that.
So, the last section does have a lot in it. I may bounce around a little bit just for time but, again, it's about accessible testing. So, the first and foremost being the research methods. Most research methods, you're just going to run without special caveats, but I went through a list to point out just a few things. I'm just going to bounce through this somewhat quickly.
These aren't in any particular order other than alphabetical. So, one that came to mind is biometric testing, which is gaining some traction: eye tracking or other emotion-based testing. Some users just won't be able to do that. They won't feel comfortable with the instrumentation. They won't be able to sit in the way you need them to. They don't use their eyes to navigate the computer. So, you'll have to come up with equivalents. I don't have a good answer for that just yet. I haven't sat down and thought about it entirely, other than to say you just need to be equally as thorough if you're going to use biometric testing, and you're going to have to have triangulation of your data, not just biometric testing, if you're doing inclusive testing. For card sorting and tree testing, just make sure it's an accessible tool, or leverage accessible drag and drop to do that. That will be a theme: the tools, just like the artifacts you're using, need to be accessible. And when I say accessible, that means WCAG compliant if it's on the web.
Diary Study. I love diary studies. Just have multiple means of collection. So, there are a lot of tools out there that do diary collection. Unfortunately, most of those tools won't be accessible to all populations, so just be creative. They can use email, voicemail. They can send you or upload videos. They can do Word documents. And, just a note, blind people can and do take photos, so blind people can do diary studies, as well, so, that's just a note to put in there.
First Click Test. Again, you'll want an accessible webpage, not just an image of a page, and you're also going to want to detect keyboard events as well as mouse clicks from assistive technology, such as voice dictation software. So, again, the key is to be careful of the tools you're using. Just a caveat, again: some of this may be foreign. This is mostly for the UX nerds on the call; they will know what I mean by these different research methods.
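To illustrate the point about capturing more than mouse clicks, here is a small sketch of first-click capture logic that treats keyboard activation (Enter or Space) the same as a click. The event objects here are plain stand-ins for DOM events so the logic stays self-contained; in a browser you would attach this to 'click' and 'keydown' listeners on the page under test. Field names like `targetId` are illustrative, not from any particular tool.

```javascript
// Record only the participant's first activation, whether it came
// from the mouse, a keyboard, or assistive technology that
// synthesizes click events (e.g. voice dictation software).
let firstClick = null;

function recordEvent(event, timestampMs) {
  if (firstClick) return firstClick; // only the first activation counts

  const isMouse = event.type === "click";
  const isKeyboard =
    event.type === "keydown" && (event.key === "Enter" || event.key === " ");

  if (isMouse || isKeyboard) {
    firstClick = {
      target: event.targetId,            // what the participant activated
      input: isMouse ? "mouse" : "keyboard", // which input method was used
      timeMs: timestampMs,
    };
  }
  return firstClick;
}
```

Recording the input method alongside the target also lets you later check whether keyboard-only participants landed somewhere different than mouse users, which a mouse-only tool would never reveal.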
For Focus Groups, you want to make sure everyone has the opportunity to participate. Understand if there's a need to have alternative communication styles represented, and also consider an alternative to just a live or remote focus group presentation. Maybe use a chat room or group forum where people can still bounce ideas off each other, but it's more passive than active, and asynchronous.
With Observations, know that if you're doing an observation in person, like you're following a person along in their daily life, you'll probably look like an aide. People are going to assume you're some sort of aide to that person; it happens. So, with that, you may want to consider instrumenting the participant with video: have them carry a GoPro camera around, or do video on their phone, as opposed to being present, if the idea that you're going to be seen by others may impact your data collection.
Participatory Design (Co-Design, also). That's when users are creating artifacts. You may have to create the artifact on behalf of your participant if they can't physically do that themselves, or just to save time. But, if you have to do that, then make sure you're getting explicit confirmation from that user that you are representing that artifact correctly, that you're representing their idea correctly. And maybe even consider leaving that artifact with that participant or with that group so that they can reflect on it, and maybe even have a follow-up interview afterwards. I tend to do that with a lot of research: I'll do a follow-up afterwards, after people have had time to think about what you asked them, particularly in foundational research.
Online Surveys. Again, use an accessible, compliant tool. You'll also want to minimize the use of really complex question types. Try to keep it basic and easy to enter, and also carefully word your demographic questions. Keep in mind that not everybody identifies as having a disability. So to ask someone, "What's your disability?" may not be appropriate. Particularly, I think about the Deaf community. They are not hearing impaired. They are Deaf. So, be careful how you word questions like that as you're doing your demographic collection.
And then, finally, Usability Testing, probably our most famous kind of testing in the UX world. For usability testing, one thing to keep in mind is that participants don't want to look helpless. What I mean by that is, if a person is struggling to use your system, there may be a tendency to really want to push and figure it out, because the person doesn't want to feel like they can't, when really you need to just keep reassuring them: "Our system is the one that's giving you issues, not the fact that you don't know how to use it or you don't know how to use your assistive technology." So, just be mindful that if someone starts to struggle, there's kind of an additional layer to how that may manifest.
In terms of unmoderated usability testing, I know that that's, you know, a thing. It makes sense for some testing methods, like surveys, but generally, I don't do unmoderated testing and really try to make sure I'm present, either remotely or in person, because research is also educational.
So being able to have conversations around what a person is doing, why they are trying certain strategies, that information is useful beyond that specific research and can be educational to the team. Also, keep in mind the measures of success. Time on task, for example, is a big measure for usability studies. That may not be applicable if you have a diverse population; it may only be applicable if you have an exclusive population, because, again, if someone is, say, using a joystick, they will naturally take longer to do a task, so that time-on-task measure may be skewed.
Same for System Usability Scale scores and ease-of-use metrics. You may need to separate those out, as well, because a system may be easier to use for one population but not for another. Like, maybe the blind users have a lot of difficulty, but mobility impaired users do not.
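One way to handle the skew described above is simply to report success measures per assistive-technology group instead of pooling them. Here is a minimal sketch of that idea for time on task; the field names (`atGroup`, `taskSeconds`) are invented for illustration.

```javascript
// Compute mean time-on-task per assistive-technology group, so a
// slower input method (e.g. a joystick) doesn't skew one pooled
// average across all participants.
function meanTimeByGroup(sessions) {
  const totals = {};
  for (const { atGroup, taskSeconds } of sessions) {
    totals[atGroup] ??= { sum: 0, n: 0 };
    totals[atGroup].sum += taskSeconds;
    totals[atGroup].n += 1;
  }
  const means = {};
  for (const [group, { sum, n }] of Object.entries(totals)) {
    means[group] = sum / n;
  }
  return means;
}
```

The same grouping applies to SUS or ease-of-use scores: compare each population against its own baseline rather than against a single number that hides the differences.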
So, a synopsis of research methods can be found on the website; I have a link in the presentation there in Zoom, and the presentation will be made available on SlideShare. I'll talk about that a little bit later, but you'll have access to some of these links that are present.
Lastly, a few notes in terms of research methods about communication.
So, use interpreters if needed. Like right now, we have a captioner on our call today. So, determine if you need sign language interpreters. And if so, do the people that you're bringing in have recommendations for who they want to use? And also offer to pay for that service for them.
And if someone has a speech impairment, determine if there's a caregiver who can maybe help relay the messages on behalf of that person so that you can understand the interaction. But also be prepared to type. I recently did this as well. I had a participant who was hearing impaired. She could speak to me but I typed my questions.
So you'll want to just be prepared for that. And also understand that, overall, your protocol is probably going to take more time in general because of these extra bits: communication, using assistive technology, having to level set. So, keep those things in mind as you're getting ready to conduct your research.
So, wrapping up here on a few more topics, recruiting is a big one. I don't have a whole lot to say there other than there are generally two ways to do it. One is relationship building. That happens over time. Those same resources from the earlier slide where you learned accessibility are the same resources you can use to get to know people who can participate in your studies.
The downside to that is you may limit your pool of participants. So, it may be that only certain personality types or only certain kind of people are involved in those kind of organizations. So you may want to be careful about whether or not you're really getting a diverse look at a certain population by using some of those communities that are available. But, nonetheless, those communities are available to you. And you'll want to look again locally as well as nationally and start to get involved. It's a snowball effect. But you can do it.
The other way is to use a recruiting firm, and that's the route that I've taken, because I'm not looking at one particular population; I need diversity all the time. But also, the downside to relationship building is we don't like what we call "career participants." We don't like to use participants over and over. So, it feels difficult to build a relationship with an organization and then say, "Okay, great, I used you one time," or, "I used this set of people one time and I really can't use them again." So it makes more sense to use a recruiting firm, because they have a group of participants at the ready who are being used by different companies, but for me it allows me to get unique participants. And the one that I've used most frequently, so far, is Knowbility. They have a database called Access Works, and they have been really helpful to me in finding unique participants. The downside is that they may not have a specific demographic I'm looking for, because there's more breadth than depth.
There's a reality to recruiting, though, too, what I just call minority group implications. One, we talked about that people may just not be signed up for organizations. A national XYZ organization does not encompass every single person with every single disability. You still may want to try reaching out to key stakeholders, but that takes more time: reaching out to industry professionals, who can then reach out to people with disabilities, that extra layer takes more time. But, also, people are still breaking boundaries and barriers. I had the example of Haben Girma. People are still being the first to do something. So, if your product has a specific population of users, it may be that a person with a disability has not yet risen to that level, and that's the reality of it, so you want to be mindful of that as well. There may actually just be a small number of people you can recruit from. But still try to recruit, and to overcome that, use appropriate proxies.
So, I have a related works section later on, and one proxy that may be good for you is older adults. Older adults might make a great proxy for your studies in terms of having a disability, as well as being available and giving you feedback similar to what you may be looking for. And they may be easier to recruit.
Also, maybe you're not going to meet all of your research criteria. In my case, I work with higher ed products; well, technically anyone can go back to school at any time, so depending on the product, I sometimes have to make the decision that the participant isn't going to be someone who is currently enrolled in college, just someone who can understand a college-level product. That way I have a larger pool of people to recruit from.
So, I know recruiting is a thing. Keep working at it, you can do it.
On logistics, just for time's sake, I'm not going to go into too much detail here; I have it in the slides. The thing, again, to keep in mind is to be accessible at every touch point: the consent form, the location, communicating that location, the tool you're going to use. I recommend Zoom, the system that we're using right now, as a remote testing tool over some of the other testing tools.
Everything along the way needs to be accessible, and if you're not sure, ask your participants. I also have, not only in my slides, a link to another presentation from UXPA Boston that goes into phenomenal detail about facilitating. So, I'm sorry if you joined to hear about those logistics; please ask questions, but for time I'm going to kind of skip through that. The meeting place, the consent forms, the paperwork, even the payment system: make sure all of those things are accessible. Because, again, otherwise you're going to leave out a participant.
Then finally, with analysis and reporting, again very quickly: make sure you do your analysis in a way that you understand how to interpret. This goes all the way back to the beginning: you have to understand accessibility in order to interpret your results. And then, in terms of reporting, make sure you include everyone. Everyone on your team needs to understand accessibility, and your firsthand interactions with users are going to be really important for educating your stakeholders throughout the company.
So, share out as much as possible. Create great videos. Use those videos to explain the guidelines that you're asking your teams to adhere to. Really leverage what research can be used for, but also make sure your analysis keeps in mind how your participants use the computer and how that manifests in your findings.
Now, we may have time to go back a little bit but, the long and short of it is, accessibility user testing needs accessible tests. The protocol and facilitation details are going to make or break the data collection. So, I have closing thoughts, but I just went through that, so I wanted to point out again the related work. There are some papers I've been involved in, listed under publications on my LinkedIn page, which is public, that may be relevant in terms of foundational research examples, methods on participatory design, and recruiting. There's also a paper that I did here at Pearson with Mallory Van Achterberg, "How do I know you know accessibility?", on understanding what it means to know accessibility, plus the papers I mentioned earlier on user research and what you need to know from the UXPA presentation, the paper on using older adults as a proxy, the paper on a large user pool for accessibility research, and, lastly, the Knowbility Access Works link in case you want to use them for recruiting. I just want to be sure I left time for questions before I went back to sections I didn't delve into. So, Michael, I'm ready for that.
>> MICHAEL BECK: Okay, before we actually get to any questions, I just want to make a note that I'll have the links for all of those papers in the YouTube description once the video is up. And hopefully a link to the slides, as well.
>> MICHELE WILLIAMS: Yes.
>> MICHAEL BECK: Perfect. The first question that we actually had was while you were talking about MS Office: how do you utilize that in user testing in particular?
>> MICHELE WILLIAMS: So, I've never necessarily needed to do that. But the way that I would envision it is, if you're mocking up your designs, then instead of mocking them up in some of the other tools that may be available, one that comes to mind is InVision, mock them up in a Microsoft Office product, because that will give you some accessibility markup. So, headings, links, bulleted lists, those kinds of artifacts, particularly if you're talking about the web or applications, that kind of markup will be available to you through Microsoft Office or a product like that. That's not going to be available in some other tools. So, even if the person understands that they are not going through a real page, they can still navigate more in the way that they typically would, especially if they are using assistive technology or the keyboard to navigate. So that's what I meant by that, and hopefully that answers the question.
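[Editor's note: to make the "real markup" point concrete, here is a hypothetical sketch, not from the talk. It shows a prototype expressed with genuine headings, links, and list structure, the kind of semantics a screen reader or keyboard user can actually navigate, as opposed to a flat image mockup. The page content and function name are invented for illustration.]

```python
# Hypothetical mockup generator: the page keeps real semantic structure
# (headings, links, a list) rather than being a flat picture of a UI.
def build_mockup() -> str:
    """Return an HTML prototype whose structure assistive technology can navigate."""
    courses = ["Biology 101", "Calculus I", "World History"]
    items = "\n".join(
        f'    <li><a href="#course-{i}">{name}</a></li>'
        for i, name in enumerate(courses, start=1)
    )
    return (
        "<main>\n"
        "  <h1>Course Dashboard (mockup)</h1>\n"
        "  <h2>Your courses</h2>\n"
        "  <ul>\n"
        f"{items}\n"
        "  </ul>\n"
        "</main>"
    )

print(build_mockup())
```

A Word or PowerPoint file built with the built-in heading and list styles carries this same kind of structure, which is why an Office mockup can stand in for a real page during testing.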
>> MICHAEL BECK: Okay. That makes perfect sense to me. Kind of that infrastructure is already there.
>> MICHELE WILLIAMS: Right. Leverage the infrastructure rather than start it from scratch or not having it at all.
>> MICHAEL BECK: No need to reinvent the wheel. What are the challenges of doing research in a user's environment? I guess that would be more like, instead of doing it remotely, going to a user's...
>> MICHELE WILLIAMS: I do talk about that. So, physical spaces. There's research that can happen in a lab, research that can happen in public, and sometimes research in the home, in the user's own environment. For that, I would say it's actually not so much a challenge as it is boundary setting. You just want to make sure that you have the proper relationship with that person, that they are comfortable with you being in their space. Then, particularly if it's their home or somewhere intimate, have an understanding of what the boundaries are, like staying in communal spaces, their den or living room or kitchen, and not their bedroom unless you really, absolutely need to be there. But I do find that kind of research, particularly if it's foundational and has to do with the type of product you're making, is important to do. And usually, if you have that understanding, your proper consent form, and you've talked beforehand, users are very comfortable opening their home to you, because they want to get that information out, and being at home makes it easier for them to convey the scenarios that are important to highlight.
>> MICHAEL BECK: Okay. It sounds like a lot of the way to make this successful is to do a lot of groundwork beforehand, obviously groundwork with the actual protocols themselves, but also with the person. Get to know them before you start working with them as a tester of something.
>> MICHELE WILLIAMS: Right. So, let me go through the logistics since we have some time; I just wanted to make sure that I wasn't going to be right up on the end. I'm going to double back to the logistics part and talk about the planning stages, because that's absolutely correct. With any research, but particularly with accessibility research, it's all about the planning, walking through from start to finish and considering the details as you go.
I mentioned this at a high level, but start with even the consent form. A lot of times I've seen them as read-only documents, because they have a special footer or aren't marked up correctly, and then they require a physical signature. We can't have any of that. We need to make sure we can use email or verbal consent so it doesn't require a physical signature. We need to ensure the form is readable by assistive technology and that the wording is not too legal or hard to understand; there's a WCAG guideline about the reading level of documents. And we need to ensure that participants can give their own consent and it doesn't also need to involve a guardian or caregiver.
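[Editor's note: the reading-level guideline Michele mentions is WCAG Success Criterion 3.1.5 (Reading Level, Level AAA). One rough self-check on a consent form is the Flesch-Kincaid grade formula, sketched below. The vowel-group syllable counter is a crude heuristic and the sample sentences are invented, so treat the scores as indicative only.]

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

plain = "We will record this session. You may stop at any time."
dense = ("Participation necessitates comprehensive acknowledgement "
         "of institutional documentation requirements.")
print(round(fk_grade(plain), 1), round(fk_grade(dense), 1))
```

The plainly worded consent sentence scores at an early grade level while the legalese version scores far higher; rewriting toward the lower score is the practical takeaway.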
The payment system on the backend: I've seen a lot of gift card systems, "Oh, yeah, log into this gift card system." Is that gift card system accessible? And don't compensate in payments that the participant can't use. Make sure it's something logical that works for everyone, or just cash.
And then, also: technical checks. Particularly if you're doing remote testing, which I do often, I use Zoom. I have the user come on a couple of days beforehand and we do a 15-minute technical check, and that's where I also begin to understand if there are any communication needs that are going to be important to know before we start. I'll understand whether I need to type versus speak, whether I can see their computer, whether I can hear their computer. All of that happens prior to the session.
And then, if they are coming into a lab, you're going to want to be careful about equipment needs. Are they bringing their own equipment, or are you providing that? I recommend they bring their own, but sometimes that's not feasible, and if it really isn't feasible to have what they need in the lab, or if they can't travel, then you need to go to where they are. And then, going back to physical spaces, if you're meeting in a lab, make sure you're giving street-to-door directions, make sure the space is ADA compliant and transit friendly, or that you're providing compensation for transportation to your space. Communicate the exact meeting point and make sure the space won't cause any participant anxiety. What I mean by that is: is it quiet enough, is it easy to traverse, and is the space itself comforting, or does it feel like you're being tested? Work with your participants on that. Explain the space. Explain the study. And let them tell you what they are and are not comfortable with. Then again, if meeting in public, make sure that public space is agreeable and conducive to research. Know your boundaries, or even compromise on a meeting space the person generally frequents. If they're a frequenter of a library meetup, then meet at that library space. Things like that.
>> MICHAEL BECK: That leads to a question from Sandra: how do you learn to adapt to and understand the speech of someone who talks or sounds differently than you're used to?
>> MICHELE WILLIAMS: That's just a skill that's learned over time, I find, because I really don't know how I do it, but most of the time I am successful. If you're not, that's fine. Just make sure that person is aware and come up with a way to get that interpretation. They may have a close family member or a caregiver who wouldn't mind sitting in on the study with you, and that person can interpret; just make sure that's something you've both agreed to and know from the beginning. Or you may have to do something that is more text based rather than live conversation. If you think that you're going to have too much difficulty communicating, then you may need to alter your research method to get that data a different way.
>> MICHAEL BECK: That would be where that quick 15-minute Zoom a few days before would definitely be a big payoff for you. You would know almost right away.
>> MICHELE WILLIAMS: Yes and do that if you're going to meet in person, too, so meet up with that person before you travel and meet in that space. Go ahead and understand all of that ahead of time.
>> MICHAEL BECK: Okay. What are the advantages of testing on a user's own equipment as opposed to something you would provide?
>> MICHELE WILLIAMS: Every assistive technology, just like every other technology, has settings. So, first and foremost, that person is going to be comfortable with their keyboard layout. If they are using a device, there are different keyboard layouts; sometimes you have the extra number pad off to the side, things like that. So, they will be used to the keys on their keyboard, how those feel, and how they interact, and they will have their own settings on the device. Also, there are different assistive technologies. If we talk about screen readers, there are three popular ones: there's VoiceOver on the Mac, there's JAWS, and then there's NVDA. You may not have a license to the one they primarily use, and that may throw off your testing as well.
So it's going to be important that you keep the user as comfortable as possible, using your product in the way that they would typically use it. You don't want to introduce artificial conditions by forcing them to use an assistive technology that they don't generally use, or in a way they don't generally use it. There are so many settings. Even if we talk about the height of something that's attached to someone's wheelchair, or the other ergonomic considerations on some physical devices, I don't think anyone would be able to accommodate all of those different considerations in their lab, particularly not in any reasonable amount of time. So, if the person can bring their equipment with them, it usually is better to just let them bring it. You really shouldn't be doing testing that requires a certain type of assistive technology. If you're designing to the standards, the point of those standards is that your product will then work with any user's assistive technology, in whatever settings they have.
>> MICHAEL BECK: Okay. Well, we're nearing the end. Unless anyone has any final quick questions.
>> MICHELE WILLIAMS: I just want to make sure...I know I went through... Just making sure again you understand, it's an ecosystem. Your designers and developers need to know how to build accessible systems, the researchers need to know how to facilitate the research and interpret the findings, your testers need to know how to test for accessibility and how people use assistive technology, and product managers need to prioritize accessibility and get it into the project. It really does take a village to do this. But you can do it, and you should do it.
>> MICHAEL BECK: It's holistic. And it's very odd hearing you say, "Oh, it's computers! It's holistic," but accessibility is holistic computing. It's making sure everybody is involved, not just the users, but from the get go, from the backend, from the frontend, making sure everybody knows what everyone else is doing, making sure everyone is doing it together and all of the pieces fit. That's the definition of being holistic: making sure everything fits together. So, holistic computing. Interesting.
All right. Well, thank you Michele again for that excellent and thought provoking presentation. Somebody else mentioned before it was very thorough.
>> MICHELE WILLIAMS: Oh good.
>> MICHAEL BECK: I agree, that was great.
>> MICHELE WILLIAMS: I hope so.
>> MICHAEL BECK: And thank everyone who joined us today.
>> MICHELE WILLIAMS: Thank you everyone.
>> MICHAEL BECK: Next month we'll have Tenon's own Mallory van Achterberg on to discuss designing and coding for low vision. That will be on Wednesday June 5th at 11 a.m. So thanks again, Michele and to all of you. And we will see you all next month.