A hallmark of robust human intelligence is the ability to flexibly and dynamically adapt behavior to the current situation. For dialogue behavior, this entails adapting to features of both the dialogue partner (e.g., relationship, age, personality) and the dialogue situation (e.g., task context, asynchronous interaction, limited modalities such as voice-only communication). We do not yet fully understand how humans do this, nor do we have the ability to produce such dynamically adaptable behavior in human-computer dialogue interaction. In this talk I will discuss our recent work on dynamic adaptation to the user and present experimental results showing that it is possible to automatically generate both verbal and nonverbal system behaviors that are perceived by users as reliably expressing particular system personalities. I will describe two of my current projects at UCSC that are integrating these capabilities into mobile dialogue systems: SpyFeet, an augmented-reality role-playing game designed to encourage girls to exercise, and Skipper, a dialogue system that gives pedestrians directions in both urban and campus environments.