I'd like to see even BETTER integration of OS X and iOS's Text-to-Speech features
Not sure how many people use this, but I personally find OS X and iOS's TTS features a godsend. Whether I'm browsing the web or writing an article, it's nice to be able to have the OS read back anything I want.
Some may ask "What's the point?"
For me, I tend to do a lot of multitasking, and I don't always have time to read an entire article while I'm trying to focus on work. For example, when I'm editing a video and forget how to do something, I Google it and hope to at least find a video explaining how to do it. Unfortunately, a lot of videos require you to continuously watch the screen to understand what's going on. A text tutorial paired with TTS comes in handy when I'm working full screen and need something explained while I continue working.
When driving, I don't like using my phone for anything but getting directions or changing a song on Spotify. It would be cool if, on iOS, I could have my phone read my Pocket articles to me without having to interact with the screen and take my attention off the road.
As for improving what's already there, I'd like better ways to have TTS read targeted parts of a page or article, so I could keep it from reading footnotes, titles, links, or ad text. Greater in-app control for developers would let apps like Pocket and Feedly keep reading articles to you until you tell them to stop... or whatever you want.
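For what it's worth, developers do already get a starting point for this kind of in-app playback through AVFoundation's `AVSpeechSynthesizer`. A minimal sketch of what a read-it-later app could build on today (the `ArticleReader` wrapper name is my own, not an Apple API):

```swift
import AVFoundation

// Hypothetical wrapper a Pocket/Feedly-style app might use.
// AVSpeechSynthesizer queues utterances and speaks them in order,
// so an app can feed it article text and let playback continue
// until the user explicitly stops it.
final class ArticleReader {
    private let synthesizer = AVSpeechSynthesizer()

    // Queue an article's body text for spoken playback.
    func read(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    // "...until you tell it to stop."
    func stop() {
        synthesizer.stopSpeaking(at: .immediate)
    }
}
```

What's missing, and what I'm wishing for, is the layer above this: system-level smarts about *which* parts of a page to feed in, so every app doesn't have to strip footnotes and ad text on its own.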
There are only so many informative podcasts I can listen to in my car; I wish I had a way to hear all my backlogged Read It Later articles while I drive.
Apple already has a good jumping-off point with the TTS implemented across their devices. Even better voices and deeper integration would be amazing.