Newer ARM processors come with security extensions called TrustZone. TrustZone is designed to enable a secure environment for software. Effectively, the TrustZone extensions split an ARM processor into two domains of operation - a secure and a non-secure one. From the point of view of TrustZone-unaware code, the two domains look identical. Each domain has the usual 7 modes of operation (usr, sys, svc, irq, fiq, abt, und), and the secure domain adds an 8th mode - mon, which is meant for secure monitor code. Which domain the CPU is executing in is controlled by the NS bit of the Secure Configuration Register (SCR) in CP15 register c1. Most of the system control registers are banked, so code in the secure and non-secure domains is more-or-less unaware of the other side. The 32-bit physical address space is extended by one extra bit, the secure bit, creating two separate physical address spaces, enabling memory-mapped secure devices as well as secure-only memory. With appropriate hardware support, one can route specific interrupts to the secure domain, or carve out a "secure" area of RAM. The intended usage scenario for TrustZone is creating a secure nucleus which can serve as a boot-time root of trust and as the foundation for security in a system. The code running in the secure domain effectively owns the hardware in the system, and has access to both secure and non-secure resources. The non-secure domain is largely unaware of the secure domain and, in a well-designed system, cannot access any secure resources.
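As a concrete illustration, here is an ARMv7-flavoured pseudo-assembly sketch of how monitor code might flip the NS bit. The SCR encoding (CP15 c1, c1, opcode2 0) is the documented one, but the surrounding sequence is simplified and untested:

```asm
@ Sketch only: toggling SCR.NS from monitor mode (ARMv7 Security Extensions).
@ The SCR is only accessible from secure privileged modes.
mrc p15, 0, r0, c1, c1, 0   @ read the Secure Configuration Register
orr r0, r0, #1              @ set bit 0 (NS): non-mon modes become non-secure
mcr p15, 0, r0, c1, c1, 0   @ write it back
isb                         @ make sure the change takes effect
```

Note that mon mode itself always executes with secure privileges regardless of the NS bit; NS only selects which world the other modes (and banked CP15 registers) belong to.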
An idea which immediately comes to mind is that the TrustZone extensions could be used to simplify virtualization - after all, they let you run code on what amounts to a "virtual" ARM processor. The secure domain would host the hypervisor, while the non-secure domain would host the virtual machines. In practice, however, TrustZone in its current implementation was never really meant for generic virtualization.
The first problem which immediately comes to mind is physical address space protection. Most TrustZone-capable SoCs I have seen so far have a facility for "carving out" a region of RAM, making it visible in the secure physical address space while hiding it from the non-secure one. This keeps the secure code and data inaccessible from the non-secure domain. However, if you have several VMs running, you will not be able to protect the physical address space of each VM from the others. Worse, in existing designs all hardware is accessible from the non-secure domain, so there is no way to prevent a VM from messing with physical device state. Additionally, if you're basing your hypervisor on top of an existing kernel like Linux, defining the secure region in terms of base and length is fairly difficult, unless you hide half (or whatever) of RAM using something like the mem= boot parameter. None of these physical protection issues are CPU issues, and all are solvable with a custom memory controller - effectively an additional MMU, with physical->machine address tables describing the secure and non-secure physical address spaces. However, unless you are designing a new SoC and a device around it, you have to live with no physical address space protection between OSes. Effectively, that means the non-secure domain has to run your own code, which ensures that physical memory belonging to the secure domain or to other OSes is not trampled - i.e. paravirtualization.
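That hypothetical custom memory controller can be sketched in C as a per-domain physical->machine page lookup. Everything here (table layout, page count, names) is invented for illustration - the point is just that each domain, or each VM, gets its own view of machine memory:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define NPAGES     8            /* toy machine with 8 physical pages */
#define NO_ACCESS  ((uint32_t)-1)

/* One physical->machine table per domain, as a custom memory
 * controller might hold them. NO_ACCESS means the page is hidden. */
static uint32_t secure_p2m[NPAGES]    = { 0, 1, 2, 3, 4, 5, 6, 7 };
static uint32_t nonsecure_p2m[NPAGES] = { NO_ACCESS, NO_ACCESS, 2, 3, 4, 5, 6, 7 };

/* Translate a (ns, physical address) pair to a machine address,
 * or NO_ACCESS if the domain may not touch that page. */
uint32_t translate(int ns, uint32_t paddr)
{
    uint32_t page = paddr >> PAGE_SHIFT;
    if (page >= NPAGES)
        return NO_ACCESS;
    uint32_t m = ns ? nonsecure_p2m[page] : secure_p2m[page];
    if (m == NO_ACCESS)
        return NO_ACCESS;
    return (m << PAGE_SHIFT) | (paddr & ((1u << PAGE_SHIFT) - 1));
}
```

With one such table per VM instead of a single non-secure table, the same mechanism would also isolate VMs from each other - which is exactly what the stock TrustZone address-space split cannot do.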
The other 90% of the iceberg does end up being an ARM issue. See, the secure monitor mode is meant for a secure monitor, which facilitates switching between the secure and non-secure domains on a "secure monitor call" (SMC) or interrupt. Code operating in the mon mode can access the banked R13/R14/SPSR registers of any secure mode, because secure code can transition to mon from any other mode and back. Code in mon mode can also toggle the NS bit to access the non-secure versions of the control registers, but it cannot access the non-secure banked R13 (stack), banked R14 (link) and banked SPSR registers - not without resorting to large-overhead hacks (more on this later). So if you wanted to use the TrustZone non-secure domain for VMs, you wouldn't (easily) be able to switch those registers while scheduling. Of course, when coupled with the physical address space protection issue, this might just nudge you towards porting, say, the Xen hypervisor to run as the non-secure OS and taking it from there :-).
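To make the problem concrete, here is an ARMv7-flavoured pseudo-assembly sketch of a minimal monitor world switch. The labels and context layout are invented, and the save/restore is heavily simplified - it is only meant to show where the banked-register limitation bites:

```asm
@ Sketch only: an SMC handler in mon mode switching worlds.
smc_handler:
    ldr     sp, =cur_world_ctx
    stmia   sp, {r0-r12}            @ r0-r12 are shared, so mon must save them
    @ The banked SP/LR/SPSR of svc/irq/fiq/abt/und belonging to the
    @ non-secure world are NOT visible from mon; the heavyweight
    @ workaround is to CPS into each mode in turn and copy them by hand.
    mrc     p15, 0, r0, c1, c1, 0
    eor     r0, r0, #1              @ flip SCR.NS: select the other world
    mcr     p15, 0, r0, c1, c1, 0
    ldr     sp, =other_world_ctx
    ldmia   sp, {r0-r12}            @ restore the other world's r0-r12
    movs    pc, lr                  @ leave mon; a real monitor would restore
                                    @ the other world's saved PC/CPSR instead
```

The per-mode CPS dance to capture every banked SP/LR/SPSR is the "large-overhead hack" mentioned above, and it has to happen on every VM context switch.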
Using TrustZone to run Xen side-by-side with, say, Linux, Symbian or NT has definite advantages - no need to modify the "host" OS other than loading a driver implementing a TrustZone monitor, and pretty transparent and fast switches to the Xen hypervisor and thus to the other OSes. This provides a solution for devices where replacing the bundled OS or booting a third-party kernel is not an option... which, as far as ARM devices go nowadays, is almost all of them.