Thrilled to share some key insights from my recent project with Microsoft, where we worked to improve the user experience of their open-source accessibility product, Accessibility Insights!

Coming from an accessibility background, this project flipped the script for me: instead of focusing on products for people with disabilities, I was helping developers build those products. This shift in perspective was both challenging and rewarding, pushing me to think about communication and usability in entirely new ways.

One of the highlights was conducting user testing with individuals with low vision and blindness. It was a humbling experience to witness firsthand how even small design decisions can have a profound impact on accessibility. Here are a few key takeaways for anyone looking to conduct research with users with low vision or blindness:

1. Master the art of narration: Practice describing every element on your page in detail, following a consistent order (left-to-right, top-down, etc.). Bonus points for actually using a screen reader or observing someone else use one! This helps you understand how information is presented to your participants. Here's a video that gave me some context on screen reader functionality: https://lnkd.in/dkQDMCg3
2. Layering information is key: Start with a high-level overview of your interface layout, then dive into the specifics of each element. Think of it as building a mental map for your participants.
3. Embrace the buffer in remote testing: Technology can be unpredictable, especially when it comes to accessibility. Choose a platform you know is accessible and inform participants beforehand. Budget extra time, communicate platform usage clearly, and consider pre-test practice sessions; these small steps can avoid frustrating delays.

Sharing these insights to fuel more inclusive design!
I'm incredibly grateful to Carrie Bruce for giving us this amazing opportunity and to Nandita Gupta, CPACC for being an amazing advisor and friend throughout the process. #accessibility #uxresearch #inclusivedesign #Microsoft #usertesting
Accessibility in Digital Services for the Visually Impaired
Explore top LinkedIn content from expert professionals.
-
I learn digital accessibility best practices from our Product Diversity Office and members of ABLE* all the time. For example, a few tips that social media pros could keep in mind as they celebrate GAAD** online this week:

💬 For promo videos, remember to include accurate closed captions displaying spoken dialogue and background noises.

🗣 Also consider including a voiceover narration that verbally describes visual elements, actions, and scenes in the video to ensure viewers with visual impairments fully understand the content.

🎨 Why not opt for high-contrast color schemes? This enhances readability for individuals with low vision. Plus, use clear, easy-to-read fonts for any text displayed in the video.

📜 Lastly, offer a text transcript of the video content alongside the video itself to provide an alternative format for accessing the video's information, a benefit to individuals who prefer reading or using screen readers.

---
My LinkedIn post needs footnotes 😆
* ABLE (A Better Lenovo for Everyone)
** GAAD (Global Accessibility Awareness Day)
-
#Accessibility Tip: Increase Low Vision Readability.

Use high-contrast colors to improve readability for users with vision impairments. Readability directly affects comprehension: when text is difficult to read due to small size, poor contrast, or cluttered design, individuals with low vision may struggle to grasp the meaning and context of the content. By increasing readability, we enable them to understand and engage with the information effectively.

Increasing readability also ensures compatibility with the assistive technologies individuals with low vision may use, such as screen magnifiers or text-to-speech software. These tools rely on clear, well-designed text to provide accurate information to users. By prioritizing readability, we ensure that content can be effectively accessed and processed by assistive technologies, facilitating a seamless user experience.

Per WCAG, you want a contrast ratio of at least 3:1 between large text (such as headlines) and the background, and at least 4.5:1 between regular/small text and the background. How do you check?

- color.adobe.com
- TPGI Color Contrast Analyzer
- EightShapes Contrast Grid
- WebAIM color contrast checker

Which ones do you use?

By focusing on readability, we promote inclusive design principles that prioritize the needs of individuals with low vision. Considering factors such as font size, contrast, and layout creates content that is accessible and usable by a broader range of users, fostering a more inclusive and equitable digital environment where everyone, regardless of visual ability, can engage with content on an equal footing. #a11y #accessibility #inclusiveDesign
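The 3:1 and 4.5:1 thresholds mentioned above come from WCAG 2.x, and the checkers listed all implement the same published formula. Here is a minimal Python sketch of that formula; the helper names are my own, not from any of those tools:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 ints (WCAG 2.x formula)."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background: the maximum possible ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0

# Grey #767676 on white sits just above the 4.5:1 threshold for body text.
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # → True
```

A quick script like this is handy for auditing a whole design-system palette at once, where pasting pairs into a web checker one by one gets tedious.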
-
How is AI bridging the gap between vision and language with multimodal models? Imagine an AI that can understand text and analyze images and videos! These advanced models are breaking new ground by integrating vision and language capabilities:

- Merging text & vision: They transform both textual and visual data into a unified representation, allowing them to connect the dots between what they see and what they read.
- Specialized encoders: Separate encoders handle text and visuals, extracting key features before combining them for deeper processing.
- Focused attention: The model learns to focus on specific parts of the input (text or image) based on the context, leading to a richer understanding.

So how can we leverage this exciting technology? The applications are vast:

- Image captioning 2.0: MM-GPTs can generate detailed and insightful captions that go beyond basic descriptions, capturing the essence of an image.
- Visual Q&A: Ask a question about an image, and MM-GPTs can analyze the content and provide the answer!
- Smarter search: MM-GPTs can revolutionize image search by allowing users to find images based on textual descriptions.
- Immersive AR/VR experiences: MM-GPTs can dynamically generate narratives and descriptions within AR/VR environments, making them more interactive and engaging.
- Creative text generation: Imagine MM-GPTs composing poems or writing scripts inspired by images, blurring the lines between human creativity and machine generation.
- Enhanced accessibility: MM-GPTs can generate detailed audio descriptions of images, making the digital world more inclusive for visually impaired users.

The future of AI is undeniably multimodal, and MM-GPTs are at the forefront of this exciting new era. #AI #MachineLearning #NaturalLanguageProcessing #ComputerVision #MultimodalLearning #Innovation #FutureofTechnology
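The "separate encoders projecting into a shared space" idea behind text-based image search can be sketched in a few lines of NumPy. This is a toy illustration only: the random projection matrices below stand in for trained text and image encoders, and the dimensions are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained encoders: random projections from each modality's
# feature space into one shared embedding space (dimensions chosen arbitrarily).
TEXT_DIM, IMAGE_DIM, SHARED_DIM = 300, 512, 64
W_text = rng.standard_normal((TEXT_DIM, SHARED_DIM))
W_image = rng.standard_normal((IMAGE_DIM, SHARED_DIM))

def encode(features, weights):
    """Project modality-specific features into the shared space, L2-normalized."""
    z = features @ weights
    return z / np.linalg.norm(z)

# Fake feature vectors standing in for a text query and an image's features.
text_features = rng.standard_normal(TEXT_DIM)
image_features = rng.standard_normal(IMAGE_DIM)

z_text = encode(text_features, W_text)
z_image = encode(image_features, W_image)

# Once both modalities live in the same space, cosine similarity is what lets
# a system rank candidate images against a textual query ("smarter search").
similarity = float(z_text @ z_image)
print(-1.0 <= similarity <= 1.0)  # → True (unit vectors bound the dot product)
```

In a real system the projections are learned (e.g. with a contrastive objective) so that matching text/image pairs land close together, but the retrieval mechanics are exactly this: encode both sides, compare in the shared space.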