Apple’s Sarah Herrlinger Joining AFB Board of Directors

Per a press release put out by the American Foundation for the Blind:

The American Foundation for the Blind (AFB), a national nonprofit that creates a world of no limits for people with visual impairments, today announced the election of a new, distinguished trustee to its national board: Sarah Herrlinger, Director, Global Accessibility Policy and Initiatives, at Apple, Inc. Herrlinger was elected during the February 2018 Board of Trustees meeting.

"Sarah Herrlinger is an outstanding addition to AFB's board," said Kirk Adams, AFB President and CEO. "Ms. Herrlinger and Apple's commitment to making technology accessible to everyone reflects AFB’s mission of removing barriers and creating a world of no limits as we approach our 100th year and beyond."

As I wrote on Twitter, Herrlinger often acts as the “public face” of Apple’s accessibility efforts. She’s generally the one who represents Apple at awards ceremonies and appearances, such as her attendance at SXSW last month for the Everyone Can Code program. At Apple, Lisa Jackson oversees Apple’s accessibility initiatives as part of her duties, but it’s really Herrlinger who’s one of the big drivers, internally, of new features, partnerships, and the like pertaining to accessibility across the company.

Bloomberg: Apple to Move to ARM-Based Macs in 2020

Mark Gurman and Ian King, reporting for Bloomberg:

Apple Inc. is planning to use its own chips in Mac computers beginning as early as 2020, replacing processors from Intel Corp., according to people familiar with the plans.

The initiative, code named Kalamata, is still in the early developmental stages, but comes as part of a larger strategy to make all of Apple’s devices -- including Macs, iPhones, and iPads -- work more similarly and seamlessly together, said the people, who asked not to be identified discussing private information. The project, which executives have approved, will likely result in a multi-step transition.
For Apple, the change would be a defining moment. Intel chips remain some of the only major processor components designed by others inside Apple’s product portfolio. Currently, all iPhones, iPads, Apple Watches, and Apple TVs use main processors designed by Apple and based on technology from Arm Holdings Plc. Moving to its own chips inside Macs would let Apple release new models on its own timelines, instead of relying on Intel’s processor roadmap.

As Rene Ritchie tweeted, Apple’s likely been working on this project for years.

‘For Two Months, I Got My News From Print Newspapers’

Farhad Manjoo, writing for the New York Times:

This has been my life for nearly two months. In January, after the breaking-newsiest year in recent memory, I decided to travel back in time. I turned off my digital news notifications, unplugged from Twitter and other social networks, and subscribed to home delivery of three print newspapers — The Times, The Wall Street Journal and my local paper, The San Francisco Chronicle — plus a weekly newsmagazine, The Economist.

I could never go back to reading print. As I tweeted, when I was growing up my dad used to get the paper every day, and I used to steal the sports page from him. Aside from headlines, stories were hard to read because the text was so small; I had to get so close to the paper that the tip of my nose would be covered in ink. Manjoo’s broader point about unplugging and not getting your news from Twitter—which I do, much to the chagrin of my RSS account—is well taken, but for me, technology makes consuming news a more accessible experience that helps more than hinders. I suspect I’m not alone in that sentiment either.

The Accessibility of Everyone Can Code

Lori Hawkins, reporting for 512tech:

Ober and fellow students at the Texas School for the Blind and Visually Impaired spent Wednesday morning with a team from Apple Inc. learning to code and then using the code to pilot small drones.

“It’s really cool because a lot of us really good on computers and want to create apps, but there aren’t the tools,” said Ober, 18, who is legally blind. “This technology opens up more possibilities.”

The event was Apple’s first in-school coding session for students who are blind and low vision.

The story also mentions Apple will be represented at next week’s SXSW, where they’ll discuss the Everyone Can Code initiative and how it was built to be accessible to all.

‘Close Your Rings’

Apple has a slick-looking new page on their website about closing your Activity rings.

I’ve worn an Apple Watch nearly every day since it came out in 2015, but I’ve been inconsistent about closing all three rings. Stand is easy; Move and especially Exercise are difficult. Since the new year began, however, I’ve completed my Move goal 58 consecutive days. I’ve become so obsessed with it—and by extension, keeping active—that I’ve incorporated a lengthy daily walk around my neighborhood. The only thing that pisses me off about this is I never close my Exercise ring, and I walk a lot every single day! (This is as much about necessity in getting around as it is about staying active.) It seems like my Apple Watch doesn’t think I’m walking briskly enough or whatever, because I typically get only about a third of my recommended 30 minutes of exercise.

First Impressions of HomePod

Last May, I wrote about my experience with the Echo Dot, saying in part:

Almost 6 months later, I have to admit the Dot hasn’t even been plugged in for a while. I like the thing conceptually, but so much of its appeal and utility is tied (rightfully so) to Amazon’s ecosystem—Prime Music, audiobooks, buying, etc. Aside from my Prime membership, which I’ve had since 2011 and love, I’m just not into Amazon’s content offerings. (Prime Video is an exception.) What’s more, my house doesn’t have any smart appliances, so asking Alexa to switch on/off the lights, for example, is out of the question. In short, because I’m not a heavy Amazon user, the Echo’s appeal is limited.

As of this writing, my Echo Dot remains unused.

The then-rumored “Siri Speaker” from Apple is now a reality. Apple late last week sent me a HomePod for review, and in my first few days with the device, it already has proven more useful and enjoyable than the Echo Dot ever did. Music and podcasts sound amazing on HomePod—it is by far the best-sounding speaker I have ever heard. More to the point, however, is the fact I am entrenched in Apple’s ecosystem. I am a happy Apple Music subscriber, and I like how I have access to my library (and then some) with the HomePod. The same goes for getting iMessages, reminders, and the like. In short, the early reviewers were right: If your digital life relies on Apple hardware and software, HomePod is the smart speaker for you.

Here are a few assorted thoughts on HomePod so far.

Size & Weight. HomePod is much smaller and heavier than I anticipated. Its design reminds me a little of the 2013 Mac Pro; Apple sent me a white model and I like the look very much. In terms of weight, HomePod is the heaviest Apple product I’ve had in a long time. Granted, weight doesn’t matter when the device is sitting somewhere, but I was nonetheless struck by how heavy it is as I was unboxing it. The thing is built like a tank.

The power cord. HomePod has the nicest power cable I have ever seen. It looks and feels incredible. It strikes me as quintessential Apple: The level of attention paid to something as seemingly mundane and inconsequential as the power cord feels like something only Apple would care about. I don’t know what led to the cable being designed the way it was, but it sure is nice. I hope the company carries over this thoughtfulness to its Lightning cables and whatnot; this design deserves to live.

Sound. Phil Schiller wasn’t exaggerating at WWDC last year when he said HomePod was built to “rock the house.” As I wrote at the outset, HomePod is the best-sounding speaker I have ever used. My two favorite musical artists, Eminem and Linkin Park, sound amazing piping through HomePod. I’m no audiophile, but I can say the work Apple’s audio engineering group did here is damn impressive. HomePod’s audio quality is absolutely superb.

The Home app. The arrival of HomePod meant I had an opportunity to play with the Home app for the first time. (We don’t have any smart home appliances in our house yet.) It’s an interesting experience—it looks kinda weird with only one device set up, and it’s odd that Apple gives you only two choices for wallpaper. In practice, I think integrating HomePod’s settings within the Home app makes sense; I don’t think it needs a standalone app like the Watch does. There’s not much to tweak.

Accessibility. Like everything else Apple makes, HomePod is an accessible product. Apple officially lists two accessibility features: VoiceOver and Touch Accommodations. If you use VoiceOver on your iOS device, it is automatically turned on when you set up HomePod.

Touch Accommodations on HomePod works similarly to how it does on iOS. When enabled, the user can determine how sensitive the touch panel is to contact. For example, if you are someone with a tremor, you can tell HomePod to only react to the last part of the touch. In other words, if you touch the screen for a half-second but your finger moves elsewhere, HomePod will respond to where you end as opposed to where you started the contact.

Speech. So far, I haven’t encountered any huge issues with HomePod understanding me. The only frustrating thing, when asking to play music, is literally saying the word “play.” In a speech and language context, the pl- sound is a hard one for me as a stutterer. I can’t get to the artist or song before Siri inevitably says she “didn’t get that” or doesn’t respond altogether. For my part, I have made a conscious effort to slow down as I’m talking in order to mitigate my stuttering, but there is also a workaround. In the same way you can press and hold the side (or Home) button on an iPhone or iPad to keep Siri from interrupting, you can hold a finger down anywhere on HomePod’s touch panel to do the same thing. You may not touch the device very often since HomePod is a voice-first device, but it’s incredibly thoughtful and a de facto accessibility feature.

The Siri animation. One of the Echo’s best traits, accessibility-wise, is the blue ring that flashes atop the device when you start speaking to it. Apple has built a similar animation into HomePod—when you summon Siri, the touch panel starts glowing and spinning. If you’re blind, this obviously is of no use to you, but it is a tremendous benefit if you have low vision, as I do. Siri does audibly alert you that she’s listening (“Mm-hmm?”), but the animation is an important secondary cue that you’ve got Siri’s attention. This sort of bimodal sensory input is key for accessibility insofar as you needn’t rely on a single sense to know it’s working. You can hear and see that HomePod is waiting for you—it may not help so much if you’re across the room, but if you’re in relatively close proximity, it’s a big help. More often than not, I look for the animation before I start speaking because it’s a concrete visual cue that it’s ready for me. I have a congenital hearing loss due to my parents being deaf, so the fact HomePod is visual in this way is huge. The animation is pretty, too.

Smart Speakers, Speech Recognition, and Accessibility

Since the HomePod started shipping last week, I’ve taken to Twitter on multiple occasions to (rightfully) rant about the inability of Siri—and its competitors—to parse non-fluent speech. By “non-fluent speech,” I’m mostly referring to stutterers because I am one, but it equally applies to others, such as deaf speakers.

This is a topic I’ve covered before. There has been much talk about Apple’s prospects in the smart speaker market; the consensus seems to be the company lags behind Amazon and Google because Alexa and Google Home are smarter than Siri. What is missing from these discussions and from reviews of these products is the accessibility of a HomePod or Echo or Sonos.

As I see it, this lack of consideration, whether intentional or not, overlooks a crucial part of a speaker product’s story. Smart speakers are a unique product, accessibility-wise, insofar as the voice-first interaction model presents an interesting set of conditions. You can accommodate for blindness and low vision with adjustable font sizes and screen readers. You can accommodate physical motor delays with switches. You can accommodate deafness and hard-of-hearing with closed captioning and using the camera’s flash for alerts.

But how do you accommodate for a speech impairment?

This is a difficult, esoteric issue. It’s hard enough to teach a machine to fluently understand normal speech patterns; teaching machines to understand a stutterer (or even accents) is a nigh impossible task. Yet it must be done—speech delays are disabilities too, and to not acknowledge the issue is to do those of us with such a disability a gross disservice. Smart speakers are effectively inaccessible otherwise, because the AI just isn’t good enough at deciphering your speech. You become so frustrated that you don’t want to use the product. The value proposition is diminished because, well, why bother? If you have to repeat yourself over and over, all the skills or SiriKit domains in the world mean shit if you can’t communicate.

To be clear, speech recognition is an industry-wide issue. I focus on Apple because I’m entrenched in the company’s ecosystem, but it isn’t their burden to bear alone. I bought an Echo Dot on a lark in late 2016 to try it out, and Alexa isn’t markedly better in this regard either. It would behoove Apple and its competitors to hire speech and language pathologists for their respective Siri, Alexa, and Google Assistant teams, if they haven’t already. Such a professional would provide valuable insight into different types of speech and how to best work with them. That’s their job.

The reason I am pushing so hard on this topic is not only that I have a personal stake in the matter. The truth is voice has incredible potential for accessibility, as I reported for TechCrunch last year. For someone with physical motor disabilities, using your voice to control HomeKit, for instance, makes smart home devices infinitely more accessible and enjoyable. That’s why this perspective matters so much.

Personally, I can’t wait to try HomePod. Of course SiriKit needs to be improved with more capability. But for me, capabilities mean little if I can’t get Siri to understand my commands. Apple’s track record regarding accessibility gives me hope they’ll fare better at solving the problem. For this reason alone, I don’t believe the company is as far behind in the game as conventional wisdom says it is. I want to enjoy talking to my computers as much as anyone else, but Siri needs to understand me first.

How Apple is Revamping Its Software Development Cycle

Mark Gurman, reporting for Bloomberg:

Instead of keeping engineers on a relentless annual schedule and cramming features into a single update, Apple will start focusing on the next two years of updates for its iPhone and iPad operating system, according to people familiar with the change. The company will continue to update its software annually, but internally engineers will have more discretion to push back features that aren't as polished to the following year.

Software chief Craig Federighi laid out the new strategy to his army of engineers last month, according to a person familiar with the discussion. His team will have more time to work on new features and focus on under-the-hood refinements without being tied to a list of new features annually simply so the company can tout a massive year-over-year leap, people familiar with the situation say. The renewed focus on quality is designed to make sure the company can fulfill promises made each summer at the annual developers conference and that new features work reliably and as advertised.

As John Gruber notes, Apple's shift of its development system seems to be tacit acknowledgment of the software issues of the past few years—either things are bug-ridden, or, in the case of Messages for iCloud, don't ship on time. It's likely the company's senior executives are keenly aware of what's been said about this in the media and by developers, so as Gruber also notes, Gurman's story seems to reflect that.

The only real bummer in this report, however, is the mention of some neat iPad-centric features that are being pushed back to 2019. If this comes to pass, it'll be a real buzzkill for iPad enthusiasts like myself who hoped Apple would keep its foot on the gas in terms of iPad software development. Waiting a year for more features would suck.

Considering Type to Siri on iPad Pro

Federico Viticci, in his latest iPad Diaries column for MacStories:

The beauty of Type to Siri lies in the fact that it's regular Siri, wrapped in a silent interaction method that leaves no room for misinterpretation. For the tasks Siri is good at, there's a chance Type to Siri will perform better than its voice-based counterpart because typed sentences can't be misheard.
Type to Siri likely wasn't designed with this productivity angle in mind, but the mix of a physical keyboard, system frameworks, and SiriKit turns Siri into a useful, quiet iPad sidekick that saves me a few seconds every day and lets me keep my hands on the keyboard at all times.

Type to Siri wasn't built with productivity top of mind, but as I tweeted, Federico’s piece is a prime example of accessibility for everyone.

Report: Apple Delaying Some iOS 12 Features to 2019

Ina Fried, reporting for Axios:

Apple has shaken up its iOS software plans for 2018, delaying some features to next year in an effort to put more focus on addressing performance and quality issues, Axios has learned.
Software head Craig Federighi announced the revised plan to employees at a meeting earlier this month, shortly before he and some top lieutenants headed to a company offsite.

Fried reports some of the features being pushed back to next year include a Home screen redesign and changes to CarPlay.

While this story can be interpreted as iOS having its "Snow Leopard" moment, there is an argument to be made, as Stephen Hackett does, that Apple pitched High Sierra in a similar vein—but it hasn't really lived up to that promise.

Serenity Caldwell's First Impressions of HomePod

iMore's Serenity Caldwell was amongst a group of journalists invited to NYC last week by Apple for another briefing on HomePod. Her first impressions story is great—I especially enjoyed the fun facts. One such fact is HomePod has an accelerometer so it can detect when it's being moved, and it adjusts its audio output accordingly. Serenity's been killing it lately on Twitter, dropping serious HomePod knowledge on followers who ask questions.

See also: Serenity's piece on what other audio HomePod supports besides Apple Music.

Apple Previews Upcoming iOS 11.3 Update

iOS 11.3 isn’t a major update, but it does bring a few standout features: new Animoji, a “Business Chat” feature in Messages, and the more transparent battery metrics. Most interesting to me, however, is the Health Records feature in the Health app. As I tweeted this morning, I think there’s a potential opportunity for Apple to leverage iOS’s accessibility features to make accessing and reading one’s health data a more accessible experience.

Apple Announces HomePod Pre-Orders, Ship Date

Per Apple’s press release:

HomePod, the innovative wireless speaker from Apple, arrives in stores beginning Friday, February 9 and is available to order online this Friday, January 26 in the US, UK and Australia. HomePod will arrive in France and Germany this spring.

Of note, Apple says AirPlay 2, which is needed to connect two or more HomePods together, isn’t slated to appear until “later this year.”

It’s good to see Apple announce a ship date for this product, although it’s a bummer AirPlay 2 support won’t be coming until later. As for my interest in HomePod, I’m more excited about it than I am about the Amazon Echo or Google Home. It’ll be interesting to see how the review cycle goes; my biggest area of interest in HomePod is, of course, accessibility. The story there is a crucial element to its appeal to me and others who have speech delays.

WebAIM Conducts Screen Reader Use Survey

Insightful results of WebAIM's survey on screen reader usage. The charts clearly show most blind and visually impaired users prefer Apple's VoiceOver software. These results also help reaffirm Apple's place as the leader in accessibility—their lead over Android is substantial. In a broader sense, these charts show Apple and the tech industry at large in a different light, giving key perspective on an area that most analysts tend to ignore.

Why Stephen Hackett Bought the iMac Pro

In his column this month at iMore, Stephen writes:

The standard iMac Pro is still a lot faster than my 2015 could ever be, but then I started looking at a fully loaded 2017 5K iMac. If I opted for third-party RAM, I could pick up a 4.2GHz i7 iMac with a 1TB SSD for $3,099. With this iMac, I would still have a noticeably faster machine on my desk, but with a lot more cash in the bank.

I decided to take the conservative route, so I ordered the regular iMac. It showed up the day after Christmas. I slapped 32GB of OWC RAM in it — for a total of 40GB — and migrated my data from my trusty 2015 model.

Unfortunately, it didn't take long to realize that I had made a mistake. Even during the migration, I could hear the new iMac's fan blowing, and once I was logged in, it was even louder.

Apple Investors Write Open Letter on Children and Tech

David Gelles at the NYT has a story on an open letter from Jana Partners and CalSTRS to Apple pushing the company to offer, among other things, more granular parental controls on iOS. The idea here is that more parental controls will help parents better govern their children’s time with tech.

As I said to Carolina Milanesi and John Gruber on Twitter today, the onus is ultimately on educators and parents to monitor how much screen time a child gets. More granular parental controls are good, but it isn’t the responsibility of tech companies to limit how much time children have with technology. As I was taught in my early childhood development classes years back, that responsibility rests squarely on the shoulders of the adults who care for our children.

See also: Apple’s statement on this matter, per Rene Ritchie.

‘World Leaders on Twitter’

Twitter on Friday published a blog post in which the company explains why it won’t ban heads of state from its platform. Of course, “world leaders” is really a euphemism for “Donald Trump.” Twitter writes:

Blocking a world leader from Twitter or removing their controversial Tweets, would hide important information people should be able to see and debate. It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.

What Twitter is saying is, despite Trump’s proclivity for threatening nuclear war, they won’t take action because apparently threatening war isn’t in violation of their terms of service.