Umar Patel

Software Developer for XR/UX Design

Specializing in Virtual & Augmented Reality, App/Web Development, and Data Science Research

Hi, I’m Umar! I recently graduated from Stanford University with a Bachelor's and Master's in Computer Science, and I am currently a Software Developer working on user experience frameworks for public policy at the Stanford Institute for Human-Centered Artificial Intelligence (Digital Economy Lab). I’m an innovative XR/UX developer passionate about creating immersive, seamless user experiences across wide-ranging applications for both 2D and 3D content. I am also a huge archaeology buff, having completed an undergraduate degree in Archaeology, and I love applying technology to history and cultural heritage.

I also have experience working in data science research and product design contexts, which you can read more about below or on the other pages of my site!

In my free time, I love trying out new seafood recipes, painting, and playing guitar (I am a huge Ed Sheeran fan).

I am always happy to discuss anything related to my work (or even if it’s unrelated!), so please don’t hesitate to reach out!

Email: umar2023@cs.stanford.edu


Bio

Before beginning my graduate studies in computer science (with a focus in visual computing), I completed my undergraduate studies at Stanford in 2023, where my longstanding dual interest in problem solving and history led me to earn two degrees: a B.S. in Computer Science with a concentration in Artificial Intelligence and a B.A. in Archaeology.

As an AR/VR developer working in Unity 3D for the Oculus Quest, I build immersive, highly engaging user experiences for educational and performing arts settings. My past projects include immersive experiences for museum heritage, educational VR for subjects such as physics and geography, language learning, and even theater and improv performance. Across these experiences, I aim to incorporate LLM assistants and high-performing speech-to-text handlers, leveraging AI to create more natural and seamless interactions in wide-ranging applications and immersive contexts.

Right image: User-testing theater performance and education VR experiences.
Left image: Presenting an immersive French language learning experience at the Stanford XR Conference.

Additionally, during my time as a research assistant in the Human-Computer Interaction lab, I delved into Apple’s visionOS framework to develop multimodal interactive AR experiences for the Apple Vision Pro. I leveraged the framework’s support for gaze, gesture, and voice input to build more organic ways for users to interact with the world in AR contexts. I continued building spatial computing applications for visionOS at the David Rumsey Map Center, where I develop interactive map experiences that overlay historical cityscapes onto modern maps and data. Ultimately, I aim to keep exploring new frontiers of AR/VR development through my research and development work.
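
As a small illustration of the kind of multimodal interaction described above, here is a minimal sketch (not the lab’s actual code) of how a visionOS app can pair RealityKit content with a SwiftUI spatial tap gesture; the view and entity names are hypothetical placeholders.

import SwiftUI
import RealityKit

// Hypothetical sketch: a visionOS view that places a simple RealityKit entity
// and reacts to a spatial tap (look + pinch). Names like ArtifactView are
// placeholders, not code from the lab project.
struct ArtifactView: View {
    @State private var selectedEntityName: String?

    var body: some View {
        RealityView { content in
            // Placeholder content: a small sphere standing in for a museum artifact.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            sphere.name = "artifact"
            // Collision shapes and an input target are required for gesture hit-testing.
            sphere.generateCollisionShapes(recursive: true)
            sphere.components.set(InputTargetComponent())
            content.add(sphere)
        }
        .gesture(
            // SpatialTapGesture fires where the user is looking when they pinch.
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    selectedEntityName = value.entity.name
                }
        )
        .overlay(alignment: .bottom) {
            if let name = selectedEntityName {
                Text("Selected: \(name)").padding()
            }
        }
    }
}

In a full app, voice commands could be layered on top of this with Apple’s Speech framework, but even the gesture path alone shows how little glue code this style of interaction needs.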

Adjacent to XR development, I have also explored iOS product design by building user experiences for mobile platforms, including a study abroad application that helps students get acquainted with their host city (created during my time as the Oxford ambassador for the Bing Overseas Studies Program) and a social scavenger hunt platform made for college students. These applications use the MapKit API, incorporating location services and map interactivity into mobile apps.
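
For a sense of what that MapKit integration looks like, here is a minimal sketch assuming iOS 17’s SwiftUI Map API; the points of interest are hypothetical examples rather than data from either app.

import SwiftUI
import MapKit

// Hypothetical sketch of a city-guide map: curated markers plus the user's
// live location (which requires the usual location-permission setup).
struct CityGuideMapView: View {
    // Example points of interest a study-abroad student might look up.
    private let spots: [(name: String, coordinate: CLLocationCoordinate2D)] = [
        ("Bodleian Library", CLLocationCoordinate2D(latitude: 51.7540, longitude: -1.2546)),
        ("Covered Market", CLLocationCoordinate2D(latitude: 51.7527, longitude: -1.2570))
    ]

    var body: some View {
        Map {
            // Shows the device's position once location permission is granted.
            UserAnnotation()
            ForEach(spots, id: \.name) { spot in
                Marker(spot.name, coordinate: spot.coordinate)
            }
        }
        .mapControls {
            MapUserLocationButton()
        }
    }
}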

Over the last several years at Stanford, I have also worked as a data science research assistant at the Center for Spatial and Textual Analysis, where I develop robust vision encoder models that accurately recognize handwritten text in non-Western and low-resource languages, particularly Arabic and Early Modern Ottoman. I find real joy in applying my XR and machine learning skills to fields that have historically been left out of such technological advances, and it is certainly the most rewarding part of my work!

Right image: Presenting SwiftGenieXR, a multimodal visionOS framework for augmented reality.
Left image: Testing out my app on the Apple Vision Pro.

This past summer, I developed an immersive historical map experience for the Apple Vision Pro, combining SwiftUI with MapKit and RealityKit within the visionOS framework to engage users in interactive map exploration, with pop-ups of historical Paris overlaid on modern satellite imagery. The experience also includes an immersive gallery space that lets users explore the history of key points of interest in greater detail, and it allowed me to study how fully immersive spaces shape engaging content for VR users.
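
To give a flavor of how those pieces fit together, here is a minimal sketch under the same assumptions as above (iOS 17-era MapKit in SwiftUI on visionOS); the view name, the annotation, and the "HistoryGallery" immersive space ID are hypothetical stand-ins for the actual project.

import SwiftUI
import MapKit

// Hypothetical sketch: a window showing satellite imagery of Paris with a
// pop-up style annotation, plus a button that opens an immersive gallery
// space assumed to be registered elsewhere as ImmersiveSpace(id: "HistoryGallery").
struct HistoricParisView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    private let notreDame = CLLocationCoordinate2D(latitude: 48.8530, longitude: 2.3499)

    var body: some View {
        VStack {
            Map {
                Annotation("Notre-Dame", coordinate: notreDame) {
                    // Stand-in pop-up; the real experience layers historical map
                    // content and richer detail views here.
                    Image(systemName: "building.columns")
                        .padding(6)
                        .background(.thinMaterial, in: Circle())
                }
            }
            .mapStyle(.imagery)   // modern satellite basemap

            Button("Enter History Gallery") {
                Task { _ = await openImmersiveSpace(id: "HistoryGallery") }
            }
            .padding()
        }
    }
}

In this sketch, the gallery would live in its own RealityKit-backed scene, mirroring how the experience separates quick map look-ups in a window from fully immersive exploration.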

My Other Adventures

Rowing Adventure
Me rowing for Brasenose College at Oxford.
Washington Monument
Sizing up the Washington Monument.
Painting with Friends
Painting with friends!
Me with Lego Shakespeare in London
Visiting Shakespeare in Leicester Square.