+John A. Tamplin
NT fails a strict, classic definition of a microkernel because it runs quite a lot of stuff in ring 0. But as you know, the kernel is broken into several layers: everything above the HAL and the kernel proper is highly modular and portable, and since Vista some classes of drivers can run in userland as well. Most important, though, and unlike traditional monolithic OSes, NT keeps the bulk of OS services out of the core kernel: take filesystems, for example. NTFS is a self-contained, loadable driver that relies only on generic kernel services such as the I/O Manager, the Security Reference Monitor, the VMM and the Object Manager.
Another way to put it is that the original microkernel concept had VASTLY underestimated the complexity of the kernel. NT keeps only the really fundamental things in the kernel (which makes it conceptually a microkernel); the catch is that its fundamentals are very sophisticated. For example, both LPC and the I/O Manager are essentially ways to invoke kernel services via messages (a concept straight out of classic microkernel theory), except that NT does this in a really feature-rich, flexible, efficient way, with asynchronous dispatch, cancellation, prioritization and so on. For another example, NT assumes you really need a rich security system even at the lowest levels of the OS, because you want fine-grained access control over the most fundamental kernel objects. Once you make assumptions like these, a small microkernel is impossible: there is no way to implement such things in userland, so it's not just an optimization.
The big compromise, of course, was the graphics subsystem: not originally, but in NT 4.0 (post-3.51) they moved GDI into the kernel purely for performance reasons. That's a genuinely hard problem, since graphics/GPU is a big corner case in the entire PC architecture. And I think they finally solved this one too, with WDDM's split between user-mode display drivers and the kernel-mode display miniport.