The second annual Sight Tech Global conference, a free virtual event on December 1 and 2, will bring together the world’s leading experts in cutting-edge technologies, including AI, who are working on accessibility and assistive technologies for people who are blind or visually impaired.
Today we are excited to roll out the main stage program. These 10 fireside sessions and expert-led panels capture a remarkable sample of the work in progress at Apple, Microsoft, Google and Amazon, to name just four of the industry giants. We will be announcing another half-dozen breakout sessions in the coming days.
Please register today. It’s free, virtual, and very user-friendly for screen readers.
Sight Tech Global December 1
Designing for Everyone: Accessibility and Machine Learning at Apple
Apple’s iPhone and VoiceOver are among the biggest advances in accessibility ever, but Apple never rests on its laurels. The next wave of innovation will involve what’s called “machine learning” (a subset of artificial intelligence), which uses data from sensors on the phone and elsewhere to help make sense of the world that surrounds us. The implications for accessibility are just starting to emerge.
Jeff Bigham, Research Manager for AI / ML Accessibility at Apple
Sarah Herrlinger, Senior Director of Global Accessibility Policies and Initiatives, Apple
Moderator: Matthew Panzarino, Editor-in-Chief, TechCrunch
Seeing AI: What Happens When You Combine Computer Vision, Lidar, and Audio AR?
The latest features of Microsoft’s Seeing AI app allow the app to recognize things in the world and place them in 3D space. Objects are literally announced from their position in the room; in other words, the word “chair” seems to emanate from the chair itself. Users can place virtual audio beacons on objects to track the location of the door, for example, and use the haptic proximity sensor to sense the outline of the room.
This is all made possible by combining the latest advances in AR, computer vision and the lidar sensor of the iPhone 12 Pro. And that’s just the beginning.
Saqib Shaikh, co-founder, Seeing AI
Moderator: Devin Coldewey, Editor, TechCrunch
W3C ARIA-AT: screen readers, interoperability and a new era of web accessibility
Who knew that screen readers, unlike web browsers, are not interoperable? Website developers don’t have to worry whether their code will work on Safari, Chrome, or any other browser, but if they take accessibility seriously, they need to test against JAWS, VoiceOver, NVDA and the rest. That’s about to change, thanks to the W3C ARIA-AT project.
(This session will be followed on December 2 by a live breakout session with King and Fairchild, as well as several other members of the W3C ARIA-AT team.)
Matt King, Technical Accessibility Program Manager, Facebook
Mike Shebanek, Accessibility Manager, Facebook
Michael Fairchild, Senior Accessibility Advisor, Deque
Moderator: Caroline Desrosiers, Founder and CEO, Scribely
The “Holy Braille”: developing a new tactile display that combines braille and graphics in a single experience
Today, instant access to written braille is far less available to a blind person than print is to a sighted person. Tools such as single-line refreshable braille displays have been available for years, but just one line at a time gives the user a very limited reading experience. This limitation is especially acute when users read long documents or encounter content such as tables and charts in a textbook. The American Printing House for the Blind (APH) and HumanWare have teamed up to develop a device capable of rendering multiple lines of braille and tactile graphics on the same touch surface. Currently referred to as a Dynamic Touch Device (DTD), this tool aims to provide blind users with a multi-line book reader, a tactile graphics viewer and much more.
(This session will be followed by a live Q&A session with Greg Stilson, APH Global Technology Innovation Team Leader, and Andrew Flattres, HumanWare Braille Product Manager.)
Greg Stilson, Head of Global Innovation, APH
Moderator: Will Butler, Vice President, Be My Eyes
Indoor navigation: Can inertial navigation, computer vision and other new technologies work where GPS cannot?
Thanks to cell phones, GPS and navigation apps, blind or visually impaired people can get around outdoors independently. Navigating indoors is another matter.
For starters, GPS is often not available indoors. Then there are the challenges of knowing where the door is, finding the stairs, or avoiding the couch someone has moved. The combination of phone and cloud technologies such as inertial navigation, audio augmented reality, lidar and computer vision can form the basis of a solution, if product developers can map interior spaces, provide indoor positioning and deliver an accessible user interface.
Mike May, Chief Evangelist, GoodMaps
Paul Ruvolo, Associate Professor of Computer Science, Olin College
Roberto Manduchi, Professor of Computer Science, UC Santa Cruz
Moderator: Nick Giudice, Professor of Spatial Informatics, University of Maine
Sight Tech Global December 2
Why Amazon’s vision includes talking to Alexa less
As homes become more tech-driven, inputs from multiple sources – teachable AI, multimodal understanding, sensors, computer vision and more – will create a truly ambient surround experience. Already, one in five Alexa Smart Home interactions is initiated by Alexa without any voice command. As Alexa develops an understanding of us and our home sufficient to predict our needs and act on our behalf in meaningful ways, what are the implications for accessibility?
Béatrice Geoffrin, Director of Alexa Trust, Amazon
Dr. Prem Natarajan, Vice President of Alexa AI, Amazon
Inventors invent: three new approaches to assistive technology
Inventors have long been inspired to apply their genius to helping blind people. Think of innovators like Mike Shebanek (VoiceOver, Apple) or Jim Fruchterman (Bookshare, Benetech), to name just two. Today innovators have an almost miraculous array of affordable technologies to work with, including lidar, computer vision, high-speed data networks and more. As a result, innovation is advancing at a breakneck pace. In this session, we’ll talk to three product innovators at the forefront of transforming these basic technologies into remarkable new tools for people who are blind or visually impaired.
Cagri Hakan Zaman, co-founder of Mediate and SuperSense
Kürşat Ceylan, co-founder, WeWalk Technology
Louis-Philippe Massé, Director of Product Management, HumanWare
Moderator: Ned Desmond, Executive Producer and Founder, Sight Tech Global
Product accessibility: how to get there? And how do you know when you have?
Accessibility awareness is on the rise, but even the best-intentioned teams can struggle when it comes to finding the right approaches. One of the keys is to work closely with the appropriate user communities to get feedback and understand the needs. The result is not a compromise but a better product for everyone. In this session, we will hear from experts on the front lines of accessibility in product development.
Christine Hemphill, Founder and CEO, Open Inclusion
Alwar Pillai, Co-Founder and CEO, Fable
Sukriti Chadha, Product Manager, Spotify
Oliver Warfield, Senior Product Manager for Accessibility
Brian Fischler, Commissioner, All Blind Fantasy Football League; comedian
Moderator: Larry Goldberg, Accessibility Manager, Yahoo
For most mobile phone users, accessibility is spelled Android
Almost three-quarters of mobile phone users worldwide use phones built on Google’s Android operating system, not Apple’s iOS on the iPhone. For people who are blind or visually impaired, the key app is Google’s Lookout, which draws on the vast resources of Google’s AI infrastructure, including its computer vision database and Google Maps. How is Google approaching the huge accessibility opportunity that Lookout represents?
Eve Andersson, Director of Accessibility, Google
Andreina Reyna, Senior Software Engineer, Google
Warren Carr, Blind Android User Podcast
Getting around: autonomous vehicles, ridesharing and those last few feet
Summoning a ride from a smartphone is a dream come true for many, but when you struggle to find that ride even when it’s just a few feet away, the experience can be a nightmare, if not dangerous. How are ridesharing and autonomous taxi companies working to make the last few feet between rider and car safer and better for blind and visually impaired riders?
Kerry Brennan, UX Research Manager, Waymo
Marco Salsiccia, Accessibility Evangelist, Lyft
Eshed Ohn-Bar, Assistant Professor, Boston University
Moderator: Bryan Bashin, CEO, LightHouse, San Francisco
Don’t forget to register for this free virtual event.
Sight Tech Global is a production of the Vista Center for the Blind and Visually Impaired. We thank current sponsors Ford, Google, Humanware, Microsoft, Mojo Vision, Facebook, Fable, APH and Vispero. If you would like to sponsor the event, contact us. All sponsorship income goes to the nonprofit Vista Center, which has served the Silicon Valley region for 75 years.