The mythical iOS Dedicated High Priority UI Thread
So the main reason for this post is to try to dispel some of the BS that was posted about Android and iOS lag. It all started when Andrew Munn, a former intern on the Android test team, decided to set the facts straight with a Google+ post on why there is lag on Android and why there isn't any on iOS: Link.
"It’s because on iOS all UI rendering occurs in a dedicated UI thread with real-time priority. On the other hand, Android follows the traditional PC model of rendering occurring on the main thread with normal priority."
The post has since been amended and corrected numerous times due to his admitted ignorance on the topic, but not before it sort of went viral.
Whenever there's a story about the Android UI, or Android in general, you can pretty much guarantee that someone will whip out the regurgitated fallacy that iOS has a separate high-priority UI thread that makes its UI as smooth as butter. Their source can, directly or indirectly, be traced back to that errant post, which still seems to be a work in progress given all the corrections it keeps receiving.
Here are some of the notable responses he received.
Matthew Chaboud, Senior Software Engineer at Avid
Unlike a number of the in-awe comments here, mine will come with a different tone:
Your post is uninformed crap.
I'm a threading guy. I've spent more than a decade eating, breathing, and sleeping optimization of threaded interactive user applications. I'm not really a phone guy, but I've dinked around with iOS and Android. Both provide facilities for background threaded render-ahead and non-allocating in-place recycling of resources. I can say, with absolute confidence, that you're suffering heavily from the Dunning-Kruger effect. You know far less about threading and rendering architectures (or, clearly, application architectures) than you think you do, and the assertion that "Android UI will never be completely smooth..." ranks as one of the most tragically idiotic technical assessments that I've seen all month (and I see some real winners).
This suggests that iOS is A) perfectly smooth and B) that way because of threading lock out. It also suggests that software rendering can't manage 60fps, or that threads at normal priority can't be smooth. I can't think of any of my iOS devices (iPhone and iPod Touch on my desk and in my bag) that is always smooth. Smoother? Rendered on a background thread to quell transient interruptions when using CoreAnimation? Sure. Perfect? Oh no. I can easily write you an iOS app that is a stuttering mess.
I've written a sub-pixel accurate software-only compositing engine that could peg 1080/60p using normal priority threads (on x86) on a system with higher priority threads running other tasks, including realtime threads. I am entirely confident that this will become possible on mobile platforms without GPU assistance (though why not use the GPU when it's there?).
I'd reply, point by point, to the big-bucket-of-wrong that you've posted here, but it's just too much effort. That you leave this post up and carry such an air of authority (even after the disclaimer) is mind-boggling. That's just one more name on the do-not-hire-this-guy list.
Bob Lee, CTO of Square and former Android core library lead
Some fallacies related to Android vs. iOS UI performance made the rounds recently, inflicting undue damage to Android's reputation. It all started with a misinformed Google+ update written by former Android testing intern Andrew Munn. Peckham wrote in response: "I’m not an Android or iOS software engineer, so all I can say in response to any of this is that, assuming Munn’s correctly articulated the way rendering takes place on Android and iOS devices, it makes sense (but then so does the idea that Lee Harvey Oswald had help, at least to some people)."
Peckham makes no mention of trying to corroborate Munn's claims with a more experienced, knowledgeable engineer, like Romain or Dianne from the Android team, nor does he reference the corrections made by iOS experts in the comments on Munn's post. A more qualified engineer would support their theories with evidence like code, specifications, and performance test results, not Reddit and Hacker News comments as Munn did.
I don't claim to have all the answers, but I can tell you that implementing fluid interfaces on both iOS and Android is time consuming and difficult. The challenges are an order of magnitude more complex than Munn suggests. I haven't had an opportunity to try Ice Cream Sandwich yet, so I can't tell you firsthand how it compares to the iPhone. However, Jason Kincaid, quoted by Munn, described ICS as quite smooth and noted that both Android and iOS stutter occasionally.
The rest of the post can be found here: Link
Jay Freeman, iOS developer and creator of Cydia
"It’s because on iOS all UI rendering occurs in a dedicated UI thread with real-time priority. On the other hand, Android follows the traditional PC model of rendering occurring on the main thread with normal priority."
AFAIK this is simply wrong: the events that are later described as blocking rendering are coming in on the main thread, not some special "dedicated" one. The reason things block is because of the way the event loop on that thread is managed (and in fact is directly caused by all of that activity operating on the main thread, which we even often call the "UI thread"), and has nothing to do with threading, and certainly has nothing to do with "real-time priority".
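Freeman's distinction (blocking comes from how the single event loop on the main thread is managed, not from thread priorities) can be sketched in plain Java. This is a toy queue-draining loop for illustration, not either platform's actual implementation: a slow handler delays every event queued behind it, with no priorities involved.

```java
import java.util.concurrent.*;
import java.util.*;

// Toy single-threaded event loop: handlers run one at a time, so a slow
// handler delays every event queued behind it -- no priorities involved.
public class EventLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void post(Runnable event) { queue.add(event); }

    // Drain the queue on the calling thread (the "main"/"UI" thread).
    public void runUntilIdle() {
        Runnable event;
        while ((event = queue.poll()) != null) {
            event.run();
        }
    }

    public static void main(String[] args) {
        EventLoop loop = new EventLoop();
        List<String> log = new ArrayList<>();

        loop.post(() -> {
            log.add("touch handled");
            sleep(50); // long-running work inside a handler...
        });
        loop.post(() -> log.add("frame rendered")); // ...delays this event

        long start = System.nanoTime();
        loop.runUntilIdle();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(log);
        System.out.println("render was delayed: " + (elapsedMs >= 50));
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

Raising the thread's priority would not help here: the "frame rendered" event still waits for the touch handler to finish, which is exactly Freeman's point about event-loop management.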
"On iOS when an app is installing from the app store and you put your finger on the screen, the installation instantly pauses until all rendering is finished."
This is certainly not true. The update of the display for the installation progress might (...might) stop, as that's happening in the UI (aka, "main") thread of SpringBoard (and the event loop management might ignore incoming events that are not related to the touch event until after the gesture completes), but the installation itself is being managed by a background daemon (installd) and will not stop because someone is touching the screen. The operating system is /not/ doing something hilariously insane here, throwing out all computation on the device because someone is accidentally touching it.
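As a rough illustration of Freeman's point, here is a plain-Java sketch of that split. The daemon/UI names are hypothetical stand-ins (this is not how installd actually works): background work keeps making progress even while the "UI" thread is stuck tracking a touch, so only the on-screen progress display stalls.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: work owned by a background daemon keeps making progress even
// while the UI thread is blocked in a long touch gesture. Only the
// on-screen progress display stalls, not the work itself.
public class DaemonVsUi {
    // Returns how far the "install" got by the time it finished.
    static int runScenario() throws InterruptedException {
        AtomicInteger installProgress = new AtomicInteger(0);

        // The "installd" analogue: advances regardless of UI activity.
        Thread daemon = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                installProgress.incrementAndGet();
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            }
        });
        daemon.start();

        // The "UI thread" is busy tracking a touch gesture, so the
        // progress bar would not repaint during this window...
        Thread.sleep(50);

        // ...but the install itself kept running the whole time.
        daemon.join();
        return installProgress.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("install progress despite busy UI: " + runScenario());
    }
}
```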
Brent Royal-Gordon, iOS developer
The iOS description here isn't quite accurate. There are several things at work:
1. Compositing and previously set-up animations—all the stuff that involves the Core Animation rendering layer tree—do indeed happen on a background thread.
2. Drawing new content into Core Animation layers and setting up their animations happens on the main thread. This is the same thread that user interface actions occur on.
3. In naively written code, all developer-written code would occur on the main thread. However, Apple provides very easy APIs (Grand Central Dispatch and NSOperation) to move things into system-managed background threads. In iOS 5, you can even declare that a Core Data (object-relational database) context cannot be used directly on the main thread.
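The pattern Royal-Gordon describes, heavy work dispatched to a background queue with only the cheap "apply the result" step handed back to the main thread, can be sketched in plain Java rather than GCD or NSOperation. The class and method names here are invented for illustration:

```java
import java.util.concurrent.*;

// Sketch of the dispatch pattern: heavy work runs on a background pool,
// and only the cheap "apply the result" step is handed back to the
// single main/UI thread (dispatch_async on iOS, Handler/Executor on Android).
public class BackgroundWork {
    static final ExecutorService background = Executors.newFixedThreadPool(2);
    static final ExecutorService main = Executors.newSingleThreadExecutor();

    static Future<String> loadAndDisplay() {
        CompletableFuture<String> shown = new CompletableFuture<>();
        background.submit(() -> {
            String data = expensiveLoad();                     // off the main thread
            main.submit(() -> shown.complete(display(data)));  // back on "main"
        });
        return shown;
    }

    static String expensiveLoad() {
        try { Thread.sleep(20); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "rows from DB";
    }

    // Cheap: just applies already-loaded data to the "UI".
    static String display(String data) { return "showing: " + data; }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAndDisplay().get());
        background.shutdown();
        main.shutdown();
    }
}
```

The point of the naive-code remark above is that nothing forces this structure: put `expensiveLoad` directly on the main thread and the app still works, it just stutters.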
All that stuff you noticed—the way images aren't drawn into lists while you're scrolling, the way WebKit rendering stops when the system is tracking a touch—isn't inherently built in by a mechanism that pauses the world when a finger is on the screen. It's deliberate behavior painstakingly implemented by the developer of each individual app.
This is not a technical difference; it's a cultural difference. Good iOS developers don't ship software until it runs at something near 60 fps while scrolling and tracks touches almost perfectly; good Android developers do.
Chi-Ho Kwok, Android Developer
It's not the only thing that's wrong in the post, but I guess I'll take on a few more points. Threading and priority is the #1 issue everywhere, but in a world of broken and half broken apps, I don't expect everyone to do it right unless you force them.
Note that while the "UI thread" on iOS and WP7 may look like a real time thread, it's just another thread in the system, but one that stops everything else. It "fixes" the problem, but in my opinion, it's undesirable.
For the rest of the points:
"second": no, not really. Even if an app GC's frequently, it's a 50ms pause on ancient devices, or 3 skipped frames, and less on the hardware mentioned. It won't be laggy, as in, visible jerky animations with 10 or less FPS, lasting seconds. Capping in the gallery is probably done so background threads have breathing room to load data and catch up. If we would do if your way, it means we'll be scrolling a huge list of grey rectangles and never show anything while the finger is on the screen or if you keep flinging. The Windows phone 7's gallery does it right - it loads pictures faster than you can flick while having a butter smooth UI. It's the best, but as that's not available, I'll take a 30 FPS gallery over the HTC gallery any day. In my experience, we can fetch a hundred rows from the DB, do postprocessing and setup the views before a 250ms animation is over. In your world, the user would have to wait longer to see any useful data on the screen.
"third": Yes, hardware differs. No, it doesn't matter for rendering UI's and smoothness. Most of the laggy apps are beyond redemption, and a faster canvas implementation by using NEON (http://www.google.com/codesearch#search&q=NEON+package:http://skia\.googlecode\.com) won't save it.
"forth": composition is defined by me, the programmer. I can tell android to do composition via bitmaps and off screen buffers like iOS does, or just draw these 4 lines and a string directly on the target frame buffer. Both methods have pros and cons, and I use both depending on the view. Composition is done by the GPU since Honeycomb, but as I can hit a solid 60 FPS on a layout with 3x 200+ View elements on ancient hardware, so it's basically PEBCAK. If you must cheat more because the view is even more complex, just draw it into a bitmap and show that instead. Voila, the iOS way of animation. It's just one call to setDrawingCacheEnabled, but it isn't done by default because changing content inside that off screen buffer means drawing it twice, once to the bitmap and then to the screen, or even worse if it's deeply nested.
"fifth": Swing has nothing to do with this, but calling it slow "because it's a cross platform layer" shows ignorance. It uses Direct3d for composition and off screen buffers on the GPU for rendering, and it's a whole lot more manageable than win32. It doesn't have weird issues like deeply nesting panels causes resizes to go wrong (http://news.jrsoftware.org/news/toolbar2000/msg07779.html, a huge problem in Adobe reader 9.x). It's interesting that you mention windows phone, because it works just the same like Android: a native .net CLR running bytecodes above it.
I agree that Android's lagginess will never fix itself, but for a different reason: if you allow programmers to slack and do stuff on the UI thread, they will. There's no fix for that.