1499 words on Mac OS X 10.5 Leopard
Many people came along and read the recent post on the X.5 Menu Bar. A number of them left comments. Some of the comments were less than helpful, limiting themselves to stating that the menus look just fine to the commenter. Others helpfully pointed out that using an enlarged image to illustrate the non-subpixel anti-aliasing in the X.5 menu bar just blows the problem out of proportion.
Note that this problem can be officially worked around since the Mac OS X.5.2 update, which gives you the option to turn menu bar transparency off in the Desktop preferences. Using the non-transparent menu bar restores subpixel anti-aliasing.
Those really got me. Shouldn’t it be obvious that if you don’t see the problem, that doesn’t mean it doesn’t exist? And do I really need to state explicitly that I provide an enlarged screenshot for the convenience of my readers? After all, to the untrained eye, it can be difficult to spot the difference between greyscale anti-aliasing and the subpixel kind. So I thought an image would be helpful, as I am always annoyed by websites which claim ‘facts’ and then leave the work of verifying them to their readers.
Anyhow. The question may be how I spotted this situation in the first place. I may be mentally deranged but I’m still not the type of person who checks every pixel of his screen at 16× magnification just so he can scream ‘hello, oddity!’. I just reported the main flaw of the menu bar’s look in X.5 – its pseudo-transparency – and thought I should add the anti-aliasing oddness that seems to be related to it. I noticed the anti-aliasing difference because, well, I do see the difference between the two strings below without needing magnification:
To me, the diagonals in one of them look jagged. And so do some of the curves. If people don’t see the difference – great for them! – but they shouldn’t be setting the standards for how characters are rendered (or get glasses or sell their CRT screen). To me these diagonals look unnecessarily harsh. And a valid question may be whether this is a problem inherent to greyscale anti-aliasing or to Apple’s algorithm for doing it.
In fact, coming to think about the topic after following the discussion on Pierre Igot’s post on my post (err, circle-jerk anyone?), I had to conclude that subpixel anti-aliasing is likely a clever little technology that is not here to stay. To begin with, it’s just a hack that takes advantage of the way many screens are built today. Once screens start being built in other ways, it is going to be useless. And I suppose it already is useless today in setups with rotatable screens or multi-screen arrangements with differing screen types.
Subpixel anti-aliasing works by using the exact positions of the red, green and blue subpixels within each pixel, controlling the size and width of the glyphs it draws not just to within a pixel but to within a third of a pixel. This effectively triples the horizontal resolution and occasionally gives glyphs a slightly coloured flare at their sides. The following simple test of vertical bar characters in Lucida Grande 14 should make this very clear:
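To make the mechanism concrete, here is a toy sketch in Python – my own illustration, not Apple’s actual renderer – which rasterises a one-pixel-wide vertical bar at three times the horizontal resolution and maps each triple of samples onto the R, G and B subpixels of a pixel:

```python
# Toy model of subpixel anti-aliasing on an RGB-striped LCD.
# Brightness 1.0 = subpixel fully lit (white background),
# 0.0 = subpixel fully covered by the (black) glyph.

def coverage(left, right, lo, hi):
    """Fraction of the subpixel [lo, hi) covered by the bar [left, right)."""
    overlap = min(right, hi) - max(left, lo)
    return max(0.0, overlap) / (hi - lo)

def render_bar(x, width=1.0, pixels=4):
    """Render a vertical bar starting at x (in pixel units) as a row of
    (R, G, B) pixels; each subpixel is a third of a pixel wide."""
    row = []
    for px in range(pixels):
        rgb = tuple(
            round(max(0.0, 1.0 - coverage(x, x + width,
                                          px + s / 3, px + (s + 1) / 3)), 2)
            for s in range(3)  # s = 0: red, 1: green, 2: blue subpixel
        )
        row.append(rgb)
    return row

# A bar aligned to the pixel grid blackens one whole pixel...
print(render_bar(1.0))
# ...but shifted by a third of a pixel, the ink moves by one subpixel,
# leaving a reddish flare on the left and a cyan one on the right:
print(render_bar(1.0 + 1/3))
```

The shifted bar still covers exactly three subpixels, but not the three belonging to one pixel – the partially lit pixels at its edges are exactly the coloured flares mentioned above.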
Taking a closer look also reveals another fact: if you don’t force glyphs to all start exactly at integer positions (which OS X’s text layout engine doesn’t do) but allow fractional positions as well, the exact rendering in pixels of a glyph will depend on its position on screen:
In this simple screenshot alone, there are at least three different renderings of the same glyph. Now start thinking about the effect this has on font rendering: previously, the system could just render each glyph once to a bitmap and then re-use that bitmap every time the same glyph appeared again. This just became much more complex. But try to think a step further: what if the text is moved on screen? We’ll have to limit that to integer steps unless we are willing to redraw everything. And what about rotating the text? Surely we’ll need a newly anti-aliased bitmap for each angle.
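The position dependence is easy to reproduce even with plain greyscale anti-aliasing. In this small Python sketch (my own illustration, not OS X’s renderer), a one-pixel-wide bar rasterised at three fractional offsets comes out as three different bitmaps:

```python
# Greyscale-rasterise a one-pixel-wide vertical bar: each pixel's value
# is the fraction of that pixel the bar covers (its 'ink' coverage),
# so the resulting bitmap depends on the bar's fractional position.

def rasterise_bar(x, width=1.0, pixels=4):
    row = []
    for px in range(pixels):
        overlap = min(x + width, px + 1) - max(x, px)
        row.append(round(max(0.0, overlap), 2))
    return row

print(rasterise_bar(1.0))   # [0.0, 1.0, 0.0, 0.0]   - lands on one pixel, sharp
print(rasterise_bar(1.25))  # [0.0, 0.75, 0.25, 0.0] - smeared over two pixels
print(rasterise_bar(1.5))   # [0.0, 0.5, 0.5, 0.0]   - maximally blurred
```

Three positions, three different bitmaps – which is why caching a single bitmap per glyph no longer suffices once fractional positioning is allowed.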
The same is true for scaling. This can be easily seen by minimising the window ‘linearly’ to the Dock:
What happens when a window is minimised to the Dock is simply that the operating system takes the current bitmap of the window in question and transforms it through all the intermediate shapes. As you can see when minimising playing films or by halting the minimisation, the window’s content remains ‘live’ throughout. However, what seems to happen is that the window keeps drawing itself at its original size and shape and the windowing system subsequently does the necessary transformations. That is, the final transformation is independent of the window’s drawing. Which, I suppose, is a nice conceptual separation. It’s all neat and clear.
It does mean, though, that we are transforming bitmaps on screen whose exact form is not invariant under their position or their scaling factor (let alone rotation or other transformations). And thus we see ugly bars – in one instance almost a duplication – in the screenshots above. That doesn’t matter in practice for the quick minimisation animation, but it is not a good thing conceptually, and it makes clear that freeform transformation of screen contents and subpixel anti-aliasing just don’t go together.
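A toy model of why transforming already-rasterised bitmaps produces such artefacts – this uses nearest-neighbour sampling in Python purely for illustration; whatever resampling the window server actually uses will be smarter, but it still operates on finished pixels whose subpixel structure it knows nothing about:

```python
# Nearest-neighbour scaling of an already-rasterised row of pixels:
# each destination pixel copies the closest source pixel, so source
# pixels get duplicated or dropped depending on where they sit.

def scale_row(row, factor):
    n = round(len(row) * factor)
    return [row[min(int(i / factor), len(row) - 1)] for i in range(n)]

bar = [0, 0, 0, 0, 1, 0, 0, 0]   # a crisp one-pixel-wide bar

# Scaling up slightly samples the bar's pixel twice - the
# 'almost a duplication' effect:
print(scale_row(bar, 1.25))      # [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]

# The same bar one pixel to the left, scaled down, vanishes entirely:
print(scale_row([0, 0, 0, 1, 0, 0, 0, 0], 0.75))  # [0, 0, 0, 0, 0, 0]
```

A one-pixel feature can thus double or disappear depending on its position and the scaling factor – and with subpixel-rendered text the duplicated or dropped pixels additionally carry colour fringes that end up on the wrong subpixels.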
Now let’s look at Apple and where their graphics technologies are heading: they introduced Quartz Composer in Mac OS X.4. Quartz Compositions can draw text, and they never supported subpixel anti-aliasing. But keeping in mind that Quartz Composer works in a way where you create a string first, then pass it around to manipulate it, and finally draw it on screen, you will quickly recognise that it would be quite difficult to achieve working subpixel rendering in that setting: the patch rendering the string to a bitmap would need to know how and where the string is eventually rendered on screen. Which is impossible because of the simple way Quartz Composer works, with a strictly directed rendering flow.
And if you start thinking about this a little, I think you will conclude that even making Quartz Composer more complex, say, by letting the text renderer know about the transformations, would just create more problems than it solves. Not only would it ruin the nice simplicity of the technology, it would also be impossible to handle all the possible outcomes of transforming rendered text with subpixel anti-aliasing (think filters that are more complex than affine transforms, or even user-written distortion filters), and thus such an ‘improved’ text rendering patch would still have to revert to greyscale anti-aliasing in those situations… meaning we’d end up with a vastly more complex technology that provides little to no benefit.
The next chapter in Mac OS X graphics technologies is Core Animation. One of its strengths is that you can put your application’s views into a ‘layer’ and move them around freely and with little effort. [Admittedly you could already do some affine transforms on traditional Cocoa views. But Core Animation seems significantly more powerful.] I don’t know the internals of Core Animation, but imagine that it has been designed for the future, to lift graphics handling to a new level, giving you all sorts of transformations on multiple screens while being abstract enough to be easy to use and not interact in complex ways with applications. Then you’ll conclude that either Apple’s engineers had to put an insane effort into getting subpixel anti-aliasing to work (a situation not unlike Quartz Composer’s in complexity) or they just dropped it.
The economical decision here seems to be the latter. And a simple example made with Interface Builder, containing a string rendered by a traditional Cocoa text view and another one rendered in a view that’s in a Core Animation ‘layer’, confirms this suspicion:
Or in magnification:
I suppose this will make it easy to spot applications which, not being designed carefully enough, put all their controls on layers and are using Core Animation (the ugly huge switch in the Time Machine control panel would be my first guess). And perhaps the lack of subpixel anti-aliasing in X.5’s menu bar is owed to this as well.
To return to my initial point: I don’t mind the non-subpixel anti-aliasing per se. What I do mind is anti-aliasing that doesn’t look as good as the subpixel kind. Or, put more simply: I do mind jaggy glyphs in my menu bar. Having a ‘good technical reason’ for them looking jaggy doesn’t make them look better to me.
I’m simple-minded in that way. But I’ll stop complaining if Apple update my MacBook to a 200dpi screen or whatever it takes to make all the worrying about subpixel anti-aliasing superfluous.
In OS X’s imaging model all graphic elements – vector graphics, bitmaps, transformations, text glyphs and so on – are “drawn” first and then “rendered” to the screen. So basically text is not drawn individually at each position; rather, it is all “dithered” per view to the screen’s current size/shape/resolution settings.
Dear Sir or Madam Quarter Life Crisis, what you observe here is Apple’s decision, made around the introduction of OS X, to either use anti-aliasing or hinting, but not both. I agree with you, and have previously stated that Apple’s only recourse at this point is to 2x or 3x their screen resolution. The only source of such screens is busy making little ones, and big ones are not likely to come swarming out of China soon. So, if you want to know what you are looking for, tell people it’s the hinting gone missing and ask Apple to put it back.
Wait — I don’t really understand how subpixel AA needs any more “context” about where on the screen it’s drawn than greyscale AA does. In either case, you have to do the rasterization/AA of the text (stored/transformed as vectors, recall) as the final step before compositing the pixels on the screen. It’s not like the greyscale rasterizer just rasterizes text into some bitmap somewhere and splats that onto the screen with whatever transform is needed — in that case, “zoomed in” text in an animation layer (e.g.) would be blurry.
In fact, the “issue” with minimizing the subpixel AA is precisely that: performing further manipulations after a vector has been rasterized/antialiased breaks all of the AA assumptions about sampling rates, etc. This is equally true for regular greyscale AA.
So, how is it, again, that rasterizing text (which boils down to drawing Bézier curves, more or less, right?) at fractional coordinate locations dictated by arbitrary transforms is harder to do with subpixel rendering? At the time of the actual rasterization, the pixel locations are known! Or is it something else, like compositing subpixel AA against an arbitrary background being harder than greyscale AA (which can basically use an alpha channel)?
“This increases the horizontal resolution by a factor of three and occasionally gives glyphs a slightly coloured flare on their sides.”
Statements like this miss an important detail: greyscale features, especially thin black vertical lines, have colour flares in the opposite direction if displayed without sub-pixel AA. This is a fundamental limitation of the sub-pixel array approach to display designs, although higher-resolution displays will make it (even) less of an issue.
Apple has been pushing developers to prepare their apps for higher-resolution screens (and doing similar work themselves) since Tiger, but the screens are taking their time to come to market. However, I’d be surprised if there isn’t a 160 ppi Mac by the end of next year.
Tip: You can restore sub-pixel anti-aliasing in the Mac OS X menu bar by turning off the translucent menu bar. Go to System Preferences > Desktop & Screen Saver > Desktop.