Speaking of Global Accessibility Awareness Day, how fortuitous that the first episode of my new (again) podcast about accessibility in tech dropped on the same day. In Accessible's first new episode since 2014, Timothy and I discuss what accessibility is and what it means, the history of Apple's accessibility work, how the first iPhone redefined accessible computing, and more.
Yesterday was Global Accessibility Awareness Day. Apple, as it has done for the last few years, was an active participant in promoting the day—a day all about raising awareness of disabled people and the importance of accessible technology. As the industry leader, Apple plays a big role.
For TechCrunch, I wrote a deep dive story on Apple's activities for this year, tied into the education announcements they made in Chicago in late March. Apple invited me to a small event at the California School for the Deaf in Fremont, which was the highlight of the day. It was at that event where I got an opportunity to briefly interview CEO Tim Cook about GAAD and what accessibility means to the company and to him. The reporting was key, but man, how exhilarating it was for me. It was an amazing experience—one that I won't ever forget.
Rachelle Hampton, writing for Slate:
The fact that analog clocks have managed to stand the test of time in an increasingly digitized world is a bit of a wonder. While there may be nothing quite as charming as the quiet tick of a watch, the fact that teachers are adapting to the fact that a fair number of teenagers can’t read an analog clock isn’t another sign of corruption in The Youth. It means, as Nobel Prize Winner Bob Dylan once wrote, that times are a-changing. It doesn’t make sense for a teacher to waste precious class time to teach teenagers how to read an analog clock just so they can tell time during an exam. And if students can’t read traditional clock faces with relative ease by the time they’re sitting for exams, it means one of two things: Somewhere down the line their teacher prioritized other knowledge over doing countless worksheets on telling time, or they’ve just forgotten something they learned in elementary school once they tested out of that grade.
Kids in America who learn under Common Core standards are required to be taught how to read an analog clock in first or second grade; if kids in the U.K. are taught around the same time, then it wouldn’t be entirely surprising to learn that they forgot how to do so by the time they’re teenagers. With the sheer amount of stuff kids are expected to know once state exams roll around, it’s understandable that they’d get a bit rusty on something they’re not using every day.
One of my saved watch faces on my Apple Watch is one of the analog ones. I like to think using an "old" way to tell time on a 21st-century computer on my wrist is a bit whimsical and cool. I was born in 1981, so I grew up with analog clocks; I know how to tell time. Digital clocks are fine too, but I'm comfortable reading the clock's hands.
One amusing aspect of analog clocks versus digital ones, at least in my experience, is how young people (read: teens and twentysomethings) don't understand "approximate" time. For example, if someone asks me what time it is and it's 3:15, more often than not I'll say it's "quarter after." Likewise, if it's 3:45, I'll say it's "quarter to." When I use these phrases, they seem to throw people off—I think most people nowadays are used to exact numbers. To them, it's not "ten to 4," it's "3:50." Digital timepieces have set the expectation of exact numbers as opposed to abstract (albeit correct) phrases.
Maybe I'm showing my age. My grandma told time this way, so I guess I picked it up subconsciously.
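Just for fun, that habit of rounding to the nearest clock-face phrase can be sketched in a few lines of Python. (A toy illustration on my part, not anything from the post; the function name and phrasing rules are mine.)

```python
def approximate_time(hour, minute):
    """Render a clock time as a colloquial "approximate" phrase,
    the way someone glancing at an analog clock face would."""
    # Round to the nearest five-minute mark, like the ticks on a dial.
    minute = round(minute / 5) * 5
    if minute == 60:
        minute, hour = 0, hour + 1
    hour_12 = (hour - 1) % 12 + 1   # convert to a 12-hour clock
    next_hour = hour_12 % 12 + 1    # hour we count "to" past the half

    if minute == 0:
        return f"{hour_12} o'clock"
    if minute == 15:
        return f"quarter after {hour_12}"
    if minute == 30:
        return f"half past {hour_12}"
    if minute == 45:
        return f"quarter to {next_hour}"
    if minute < 30:
        return f"{minute} after {hour_12}"
    return f"{60 - minute} to {next_hour}"
```

So 3:15 comes out as "quarter after 3" and 15:50 as "10 to 4," exactly the phrases that seem to confuse the digital-clock generation.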
The show art is courtesy of Simon “forgottentowel” Buckmaster, who’s done artwork for Relay FM.
At long last, it returns. I’m excited to announce the resurrected Accessible podcast.
First, a quick history lesson. The show’s original run dates back to 2013–2014, a time when my career in tech journalism was in its infancy. The show’s premise was simple: A (then) weekly podcast about accessibility in tech, mostly seen through an Apple-focused lens. Accessible was part of the now-defunct Constellation FM podcast network, and it had a good but brief run. Apple featured it in iTunes for a while under “New & Noteworthy,” and we even had sponsors! Squarespace was a sponsor at one point; I still have the old ad read as a text file stored in my Dropbox. And there were guests too—Stephen Hackett and Jared Sinclair were two notable ones.
Accessible’s original run marked my first foray into the world of podcasting, and I enjoyed doing it very much. As time went on, however, life got in the way for my co-host and me, so our work on the show waned to the point it ceased to exist. We just stopped, and unfortunately lost touch with each other altogether in the process. In fact, none of the audio exists anywhere online today. Since it ended four years ago, my career has blossomed and I’ve grown a nice audience. Most of my friends and colleagues podcast regularly, and as a podcast lover, I’ve listened to countless hours of their shows over time. It eventually reached a point, coinciding with my burgeoning writing career, where I began to really miss podcasting. More to the point, I increasingly felt I should be podcasting—I should augment my written work on Apple accessibility by talking about it too. So, after much procrastination and preparation, here I am.
Rather, here we are. I’m doing Accessible with my friend Timothy Buck.
Tim is a product manager who, like me, lives in San Francisco. We’ve been friends for several years; we’ve had coffee many times, over which we had many discussions about rebooting Accessible. He’s a great guy and I couldn’t be more excited to be working with him on the show. As one friend recently told me, I “picked the right partner... he’s an amazing dude.”
Round 2 of the show has the exact same premise as before: accessibility in tech, with an Apple bent. This time, though, we’ll be doing the show fortnightly, so listeners will get two episodes per month. And like the old show, we’re planning on having guests—we have an exciting, ever-growing list in Notes of people we’d like to have on the show.
In a broad sense, I think bringing back Accessible is important for the Apple podcasting scene. While there are accessibility-minded podcasts out there, I’m frustrated at the general lack of discussion about accessibility on most shows, including all of the “AAA” shows most Apple nerds are familiar with. To be clear: This isn’t me being critical of the people doing those shows; I know everyone doing them and they’re all smart as hell. The issue is simply that accessibility is too important a topic not to cover, even though it’s admittedly abstract and hard to understand in places. It’s difficult to talk about something you can’t easily relate to. Still, it’s a subject deserving of our attention. Accessibility is a major facet of all Apple products, and it certainly has relevance to the rumor mill when ruminating on potential future products.
Put another way: The Apple community needs to talk about accessibility more than it does.
Thus, our hope for Accessible is to bring some of that conversation to the forefront. As a disabled person and as someone who is looked upon as the “expert” on this topic, I’m hoping that, if we do it right, Accessible can be a trusted source of coverage and analysis of Apple-related topics from a different perspective. To diversify the conversation is, I believe, to enrich the conversation. We hope the conversations we have on the podcast help listeners see Apple (and other companies) in a different light than usual.
As of this writing, we’ve recorded an Episode 0 and will be recording the first real episode shortly. We’re excited about the early enthusiasm for the show, and we’re excited to get to work and get the show off the ground and see where it goes. We hope you’ll join us on our journey. Any questions or comments can be directed to us on Twitter or via email. We would love to hear from you on how we’re doing!
In celebrating Autism Acceptance Month, this is a terrific project:
Over the last few weeks, participants have been creating new forms of art on iPad Pro. Though some had previous experience with iPad, most artists had not previously worked with Apple Pencil to create art. Through one-on-one sessions at Apple stores, drawing apps, and other support from Apple, these artists gained confidence with the new method and learned how to maximize their creative potential. The participants have differing abilities, and are of different genders and ages ranging from 15 to 53.
The Art of Autism is very excited to lead this series for artists of all abilities. We are grateful to Apple for their support and to the artists for sharing their art and insights. The Created on iPad art exhibit highlights the diversity of art and the creativity of many on the autism spectrum in visual art — and how technology can enhance their experience.
Though I am a fan of the Mac, I don't think there's any question that, generally speaking, an iPad is a far more accessible computer than something like a MacBook. This exhibit is one reason why.
For many years, I was a diehard Tweetbot user, the popular indie Twitter client for iOS and the Mac. Everything about the app’s design delighted me: I adored the sounds, the way fonts were rendered, and just the way the user interface looked. It was—and remains—a beautiful app. Tapbots, Tweetbot’s developer, does exquisite work. It’s no wonder the app is so beloved.
Somewhere along the way, however, I moved away from Tweetbot and gravitated towards Twitter’s official first-party app. I don’t exactly know why I made the change, but I soon came to realize that the official client is actually well done in its own right. Despite its proclivity for inserting ads and promoted tweets into the timeline, among other annoyances, I can appreciate the niceties the official app offers such as the Search tab, Moments, threaded replies, and—best of all—the dedicated GIF button.
And one other thing: Accessibility.
Many nerds like to shit on the official Twitter app for being an abomination, mainly due to the algorithmic timeline, but the truth is Twitter’s iOS client is worlds better than Tweetbot for accessibility. The UI design is much higher contrast—Twitter for iOS even acknowledges when you have the system’s Increase Contrast setting enabled, as I do. And, crucially, the official client natively supports alt-text, which allows users to append image descriptions for the blind and low vision before tweeting.
I’ve spent the last several weeks back on Tweetbot, because I still have great fondness and respect for it. More to the point, however, I wanted to revisit the app and see how it compares to the official one. I’ve identified a few enhancements Tapbots could make that would greatly increase Tweetbot’s accessibility:
- Higher contrast iconography and support for Increase Contrast
- Support for adding alt-text to images prior to tweeting
- And just for fun, add an integrated GIF button
One thing Tweetbot does extremely well, accessibility-wise, is sound. The sounds you hear when a new tweet is sent, a retweet happens, or a mention comes in are helpful audio cues that the action in question occurred. This creates a bimodal sensory experience where someone like me, who has low vision, can see and hear my interactions with the app. The sounds are undoubtedly a nod to Tapbots’ robot-y branding, but in practice they also serve as an unintentional assistive tool, and it’s great.
I have no idea where Tapbots is in development on version 5, but wherever they stand, I hope they consider improving the app for accessibility. It’d make it even more delightful for me (and others) who rely on accessibility to get the most from our apps.
TechCrunch’s Matthew Panzarino was given exclusive access by Apple to what the company calls its “Pro Workflow Team.” This is the team working on the redesigned Mac Pro, which Apple confirmed to Panzarino is slated for release next year.
Now, it’s a year later and Apple has created a team inside the building that houses its pro products group. It’s called the Pro Workflow Team, and they haven’t talked about it publicly before today. The group is under John Ternus and works closely with the engineering organization. The bays that I’m taken to later to chat about Final Cut Pro, for instance, are a few doors away from the engineers tasked with making it run great on Apple hardware.
“We said in the meeting last year that the pro community isn’t one thing,” says Ternus. “It’s very diverse. There’s many different types of pros and obviously they go really deep into the hardware and software and are pushing everything to its limit. So one thing you have to do is we need to be engaging with the customers to really understand their needs. Because we want to provide complete pro solutions, not just deliver big hardware, which we’re doing and we did it with iMac Pro. But look at everything holistically.”
To do that, Ternus says, they want their architects sitting with real customers to understand their actual flow and to see what they’re doing in real time. The challenge with that, unfortunately, is that though customers are typically very responsive when Apple comes calling, it’s not always easy to get what they want because they may be using proprietary content. John Powell, for instance, is a long-time Logic user and he’s doing the new Star Wars Han Solo standalone flick. As you can imagine, taking those unreleased and highly secret compositions to Apple to play with on their machines can be a sticking point.
This story is a terrific follow-up to last year’s Mac roundtable one.
Per a press release put out by the American Foundation for the Blind:
The American Foundation for the Blind (AFB), a national nonprofit that creates a world of no limits for people with visual impairments, today announced the election of a new, distinguished trustee to its national board: Sarah Herrlinger, Director, Global Accessibility Policy and Initiatives, at Apple, Inc. Herrlinger was elected during the February 2018 Board of Trustees meeting.
"Sarah Herrlinger is an outstanding addition to AFB's board," said Kirk Adams, AFB President and CEO. "Ms. Herrlinger and Apple's commitment to making technology accessible to everyone reflects AFB’s mission of removing barriers and creating a world of no limits as we approach our 100th year and beyond."
As I wrote on Twitter, Herrlinger often acts as the “public face” of Apple’s accessibility efforts. She’s generally the one who represents Apple at award ceremonies and appearances, such as her attendance at SXSW last month for the Everyone Can Code program. At Apple, Lisa Jackson oversees accessibility initiatives as part of her duties, but it’s really Herrlinger who’s one of the big internal drivers of new features, partnerships, and the like pertaining to accessibility across the company.
Mark Gurman and Ian King, reporting for Bloomberg:
Apple Inc. is planning to use its own chips in Mac computers beginning as early as 2020, replacing processors from Intel Corp., according to people familiar with the plans.
The initiative, code named Kalamata, is still in the early developmental stages, but comes as part of a larger strategy to make all of Apple’s devices -- including Macs, iPhones, and iPads -- work more similarly and seamlessly together, said the people, who asked not to be identified discussing private information. The project, which executives have approved, will likely result in a multi-step transition.
For Apple, the change would be a defining moment. Intel chips remain some of the only major processor components designed by others inside Apple’s product portfolio. Currently, all iPhones, iPads, Apple Watches, and Apple TVs use main processors designed by Apple and based on technology from Arm Holdings Plc. Moving to its own chips inside Macs would let Apple release new models on its own timelines, instead of relying on Intel’s processor roadmap.
As Rene Ritchie tweeted, Apple’s likely been working on this project for years.
Farhad Manjoo, writing for the New York Times:
This has been my life for nearly two months. In January, after the breaking-newsiest year in recent memory, I decided to travel back in time. I turned off my digital news notifications, unplugged from Twitter and other social networks, and subscribed to home delivery of three print newspapers — The Times, The Wall Street Journal and my local paper, The San Francisco Chronicle — plus a weekly newsmagazine, The Economist.
I could never go back to reading print. As I tweeted, when I was growing up my dad used to get the paper every day, and I used to steal the sports page from him. Aside from headlines, stories were hard to read because the text was so small; I had to get so close to the paper that the tip of my nose would be covered in ink. Manjoo’s broader point about unplugging and not getting your news from Twitter—which I do, much to the chagrin of my RSS account—is well taken, but for me, technology makes consuming news a more accessible experience that helps more than hinders. I suspect I’m not alone in that sentiment either.
Lori Hawkins, reporting for 512tech:
Ober and fellow students at the Texas School for the Blind and Visually Impaired spent Wednesday morning with a team from Apple Inc. learning to code and then using the code to pilot small drones.
“It’s really cool because a lot of us [are] really good on computers and want to create apps, but there aren’t the tools,” said Ober, 18, who is legally blind. “This technology opens up more possibilities.”
The event was Apple’s first in-school coding session for students who are blind and low vision.
The story also mentions Apple will be represented at next week’s SXSW, where they’ll discuss the Everyone Can Code initiative and how it was built to be accessible to all.
Fascinating piece on Medium by the Academy of Motion Picture Arts and Sciences:
On the evening of May 16, 1929, Academy members and their guests, 270 in total, filled the Blossom Room at the Roosevelt Hotel for a banquet featuring the presentation of the Academy’s Merit Awards. They dined on fillet of sole, half a broiled chicken on toast, string beans and potatoes. And they all knew the winners.
My favorite tidbit from this article: The ceremony lasted only 20 minutes!
Apple has a slick-looking new page on their website about closing your Activity rings.
I’ve worn an Apple Watch nearly every day since it came out in 2015, and I’ve mostly been inconsistent about closing all three rings. Stand is easy, but Move and especially Exercise are difficult. Since the new year began, however, I’ve completed my Move goal for 58 consecutive days. I’ve become so obsessed with it—and by extension, keeping active—that I’ve incorporated a lengthy daily walk around my neighborhood. The only thing that pisses me off about this is I never close my Exercise ring, and I walk a lot every single day! (This is as much about necessity in getting around as it is staying active.) It seems like my Apple Watch doesn’t think I’m walking briskly enough or whatever, because I typically only get about a third of my recommended 30 minutes of exercise.
Almost 6 months later, I have to admit the Dot hasn’t even been plugged in for a while. I like the thing conceptually, but so much of its appeal and utility is tied (rightfully so) to Amazon’s ecosystem—Prime Music, audiobooks, buying, etc. Aside from my Prime membership, which I’ve had since 2011 and love, I’m just not into Amazon’s content offerings. (Prime Video is an exception.) What’s more, my house doesn’t have any smart appliances, so asking Alexa to switch on/off the lights, for example, is out of the question. In short, because I’m not a heavy Amazon user, the Echo’s appeal is limited.
As of this writing, my Echo Dot remains unused.
The then-rumored “Siri Speaker” from Apple is now a reality. Apple late last week sent me a HomePod for review, and in my first few days with the device, it already has proven more useful and enjoyable than the Echo Dot ever did. Music and podcasts sound amazing on HomePod—it is by far the best-sounding speaker I have ever heard. More to the point, however, is the fact I am entrenched in Apple’s ecosystem. I am a happy Apple Music subscriber, and I like how I have access to my library (and then some) with the HomePod. The same goes for getting iMessages, reminders, and the like. In short, the early reviewers were right: If your digital life relies on Apple hardware and software, HomePod is the smart speaker for you.
Here are a few assorted thoughts on HomePod so far.
Size & Weight. HomePod is much smaller and heavier than I anticipated. Its design reminds me a little of the 2013 Mac Pro; Apple sent me a white model and I like the look very much. In terms of weight, HomePod is the heaviest Apple product I’ve had in a long time. Granted, weight doesn’t matter when the device is sitting somewhere, but I was nonetheless struck by how heavy it is as I was unboxing it. The thing is built like a tank.
The power cord. HomePod has the nicest power cable I have ever seen. It looks and feels incredible. It strikes me as quintessential Apple: The level of attention paid to something as seemingly mundane and inconsequential as the power cord feels like something only Apple would care about. I don’t know what led to the cable being designed the way it was, but it sure is nice. I hope the company carries over this thoughtfulness to its Lightning cables and whatnot; this design deserves to live.
Sound. Phil Schiller wasn’t exaggerating at WWDC last year when he said HomePod was built to “rock the house.” As I wrote at the outset, HomePod is the best-sounding speaker I have ever used. My two favorite musical artists, Eminem and Linkin Park, sound amazing piping through HomePod. I’m no audiophile, but I can say the work Apple’s audio engineering group did here is damn impressive. HomePod’s audio quality is absolutely superb.
The Home app. The arrival of HomePod meant I had an opportunity to play with the Home app for the first time. (We don’t have any smart home appliances in our house yet.) It’s an interesting experience—it looks kinda weird with only one device set up, and it’s odd how Apple gives you only two choices for wallpaper. In practice, I think integrating HomePod’s settings within the Home app makes sense; I don’t think HomePod needs a standalone app like the Watch does. There’s not much to tweak.
Accessibility. Like everything else Apple makes, HomePod is an accessible product. Apple officially lists two accessibility features: VoiceOver and Touch Accommodations. If you use VoiceOver on your iOS device, it is automatically turned on when you set up HomePod.
Touch Accommodations on HomePod works similarly to how it does on iOS. When enabled, the user can determine how sensitive the touch panel is to contact. For example, if you are someone with a tremor, you can tell HomePod to react only to the last part of the touch. In other words, if you touch the panel for a half-second but your finger moves elsewhere, HomePod will respond to where you end as opposed to where you started the contact.
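As a rough mental model of that "respond to where you end" behavior, you can think of a touch as a series of timestamped samples, with the device acting on the last one instead of the first. (This is a toy sketch of my own, not Apple's implementation; all the names here are mine.)

```python
def touch_response(samples, use_final_location=True):
    """Toy model of the Touch Accommodations behavior described above.

    `samples` is a list of (seconds, position) pairs recorded over one
    continuous touch. With final-location mode on, the device acts on
    where the finger ended up, not where it first made contact.
    """
    if not samples:
        return None  # no contact registered
    return samples[-1][1] if use_final_location else samples[0][1]
```

So a touch that lands near the edge of the panel but drifts to the center before lifting is treated as a touch at the center, which is exactly what helps someone with a tremor.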
Speech. So far, I haven’t encountered any huge issues with HomePod understanding me. The only frustrating thing, when asking to play music, is literally saying the word “play.” In a speech and language context, the pl- sound is a hard one for me as a stutterer. I can’t get to the artist or song before Siri inevitably says she “didn’t get that” or doesn’t respond at all. For my part, I have made a conscious effort to slow down as I’m talking in order to mitigate my stuttering, but there is also a workaround. In the same way you can press and hold the side (or Home) button on an iPhone or iPad to keep Siri from interrupting, you can hold a finger down anywhere on HomePod’s touch panel to do the same thing. You may not touch the device very often since HomePod is a voice-first device, but this is incredibly thoughtful and a de facto accessibility feature.
The Siri animation. One of the Echo’s best traits, accessibility-wise, is the blue ring that flashes atop the device when you start speaking to it. Apple has built a similar animation into HomePod—when you summon Siri, the touch panel starts glowing and spinning. If you’re blind, this obviously is of no use to you, but it is a tremendous benefit if you have low vision, as I do. Siri does alert you audibly that she’s listening (“Mm-hmm?”), but the animation is an important secondary cue that you’ve got Siri’s attention. This sort of bimodal sensory input is key for accessibility insofar as you needn’t rely on a single sense to know it’s working. You can hear and see that HomePod is waiting for you—it may not help much if you’re across the room, but in relatively close proximity, it’s a big help. More often than not, I look for the animation before I start speaking because it’s a concrete visual cue that Siri is ready for me. I have a congenital hearing loss due to my parents being deaf, so the fact HomePod is visual in this way is huge. The animation is pretty, too.
Since the HomePod started shipping last week, I’ve taken to Twitter on multiple occasions to (rightfully) rant about the inability of Siri—and its competitors—to parse non-fluent speech. By “non-fluent speech,” I’m mostly referring to stutterers because I am one, but it equally applies to others, such as deaf speakers.
This is a topic I’ve covered before. There has been much talk about Apple’s prospects in the smart speaker market; the consensus seems to be that the company lags behind Amazon and Google because Alexa and Google Assistant are smarter than Siri. What is missing from these discussions, and from reviews of these products, is the accessibility of a HomePod or Echo or Sonos.
As I see it, this lack of consideration, whether intentional or not, overlooks a crucial part of a speaker product’s story. Smart speakers are a unique product, accessibility-wise, insofar as the voice-first interaction model presents an interesting set of conditions. You can accommodate for blindness and low vision with adjustable font sizes and screen readers. You can accommodate physical motor delays with switches. You can accommodate deafness and hard-of-hearing with closed captioning and using the camera’s flash for alerts.
But how do you accommodate for a speech impairment?
This is a difficult, esoteric issue. It’s hard enough to teach a machine to fluently understand normal speech patterns; teaching machines to understand a stutterer (or even accents) is a nigh impossible task. Yet it must be done—speech delays are disabilities too, and to not acknowledge the issue is to do those of us with such a disability a gross disservice. Smart speakers are effectively inaccessible otherwise, because the AI just isn’t good enough at deciphering your speech. You become so frustrated that you don’t want to use the product. The value proposition is diminished because, well, why bother? If you have to repeat yourself over and over, all the skills or SiriKit domains in the world mean shit if you can’t communicate.
To be clear, speech recognition is an industry-wide issue. I focus on Apple because I’m entrenched in the company’s ecosystem, but the burden isn’t Apple’s to bear alone. I bought an Echo Dot on a lark in late 2016 to try it out, and Alexa isn’t markedly better in this regard. It would behoove Apple and its competitors to hire speech-language pathologists for their Siri, Alexa, and Google Assistant teams, if they haven’t already. Such professionals would provide valuable insight into different types of speech and how best to work with them. That’s their job.
The reason I am pushing so hard on this topic is not only that I have a personal stake in the matter. The truth is voice has incredible potential for accessibility, as I reported for TechCrunch last year. For someone with physical motor disabilities, using your voice to control HomeKit, for instance, makes smart home devices infinitely more accessible and enjoyable. That’s why this perspective matters so much.
Personally, I can’t wait to try HomePod. Of course SiriKit needs to be improved with more capabilities, but for me, capabilities mean little if I can’t get Siri to understand my commands. Apple’s track record on accessibility gives me hope the company will fare better at solving this problem; for that reason alone, I don’t believe Apple is as far behind in the game as conventional wisdom says. I want to enjoy talking to my computers as much as anyone else, but Siri needs to get this right first.
Mark Gurman, reporting for Bloomberg:
Instead of keeping engineers on a relentless annual schedule and cramming features into a single update, Apple will start focusing on the next two years of updates for its iPhone and iPad operating system, according to people familiar with the change. The company will continue to update its software annually, but internally engineers will have more discretion to push back features that aren't as polished to the following year.
Software chief Craig Federighi laid out the new strategy to his army of engineers last month, according to a person familiar with the discussion. His team will have more time to work on new features and focus on under-the-hood refinements without being tied to a list of new features annually simply so the company can tout a massive year-over-year leap, people familiar with the situation say. The renewed focus on quality is designed to make sure the company can fulfill promises made each summer at the annual developers conference and that new features work reliably and as advertised.
As John Gruber notes, Apple's shift in its development process seems to be tacit acknowledgment of the software issues of the past few years—either things are bug-ridden or, in the case of Messages in iCloud, don't ship on time. The company's senior executives are likely keenly aware of the noise that's been made about this in the media and by developers, and as Gruber also notes, Gurman's story seems to reflect that.
The only real bummer in this report, however, is the mention of some neat iPad-centric features that are being pushed back to 2019. If this comes to pass, it'll be a real buzzkill for iPad enthusiasts like myself who hoped Apple would keep its foot on the gas in terms of iPad software development. Waiting a year for more features would suck.
Of note for accessibility, Apple calls out two features in the “Control HomePod” section. VoiceOver is enabled automatically on a HomePod if your iOS device uses it; and you can use Touch Accommodations in the Home app to adjust the sensitivity of the touch panel on top of the speaker.
My friend Timothy Buck has a new podcast launching today, Unco, and I was honored to be on one of the first episodes. We discussed my career in tech journalism, voice-first devices such as HomePod, and the importance of thinking about accessibility in tech.