Video • Assets
-
#1 [0:00]: P3A2 Logo // a film project that I created with a remote team of Master's students at Carnegie Mellon's ETC (the Entertainment Technology Center) over a two-week period. This was a team project on which I was both the Designer and Sound Designer, using primarily After Effects to create the content seen in this clip.
--- An opening clip meant to introduce films created by our student film studio. This animated, introductory logo uses layered, artistic effects to evoke both an emotional punch representative of the work we created and a vivid sonic branding that aligns with the cinematic pieces we collectively produced.
#2 [0:10]: Temporal // a short film project (realtime rendered in 3D) that I created with a multilingual and international team of Master's students at the Carnegie Mellon ETC. This was a team project on which I was both the Technical Artist and Cinematographer, using primarily Unreal to create the content seen in this clip.
--- An award-nominated short film exploring the cyclical nature of Buddhist reincarnation as seen through the eyes of a protagonist who is able to "wake up" from the illusory world and see reality for what it is. Enlightened though he may be, he experiences the potential nirvana of his detached enlightenment alone, unable to communicate with or wake the others from their individual illusory experiences. Paralleling the Buddha himself, our character chooses to reenter the pod, sacrificing his own ability to independently reach Nirvana: he knows it is only through his re-entry into the illusory "Maya" world that he can hope to bring others out of their illusions, forgoing his own release to extend the chance at reaching that light to those left behind.
#3 [1:15]: Unveil // a 3D Interactive Experience project that I created with a multilingual and international team of Master's students at the Carnegie Mellon ETC. This was a team project on which I handled Localization (Chinese / English), using primarily Unreal to create the content seen in this clip.
--- Translated the in-game Chinese poetry into similarly metered, rhythmic English poetry, working to match both the feel and content of the original Chinese poem as precisely as possible so that the experience of reading it, and its message, carry over properly.
#4 [1:30]: Modal Train and Realtime Video-Stream Compositing // a Mixed Reality project that I created with the Pittsburgh Children's Museum/MuseumLab at the Entertainment Technology Center. This was a solo exploration within a larger team project, on which I was both the Technical Artist and Programmer, using primarily C# to create the content seen in this clip.
--- Uses Unity and a green screen (or even a background-removed video feed from Zoom) to interpose a live person right in the middle of a Unity scene. This person can be lit dynamically and cast shadows inside the scene; lighting affects the figure, as does any on-screen or in-scene VFX. Additionally, any sound from the person is spatialized from that location. The result is an extremely useful tool for seamless documentation of VR (or other immersive 3D experiences) and for compositing powerful marketing videos in 1:1 real time, with no additional editing needed. A sketch of the general approach follows.
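As a rough illustration of the idea (not the project's actual code), a minimal Unity C# sketch could route a webcam or virtual Zoom camera onto a quad standing in the scene and spatialize the person's voice from that same transform; the chroma-key material and its "_KeyColor" property are assumptions here, not part of the original project:

    using UnityEngine;

    // Minimal sketch: place a keyed video feed on a quad inside the Unity scene
    // and spatialize the speaker's audio from the same position. The quad's
    // material is assumed to use a chroma-key shader exposing "_KeyColor".
    public class CompositedPerson : MonoBehaviour
    {
        public Renderer videoQuad;        // quad standing in the Unity scene
        public AudioSource voiceSource;   // microphone / call audio routed here
        public Color keyColor = Color.green;

        private WebCamTexture feed;

        void Start()
        {
            // Grab the default capture device (green-screen camera or virtual Zoom camera).
            feed = new WebCamTexture();
            feed.Play();

            videoQuad.material.mainTexture = feed;
            videoQuad.material.SetColor("_KeyColor", keyColor); // assumed shader property

            // Fully 3D audio, so the voice appears to come from the figure's location.
            voiceSource.spatialBlend = 1f;
        }
    }

With the feed living on an ordinary scene object, in-scene lighting, VFX, and shadows can interact with it the same way they would with any other geometry.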
#5 [1:52]: Plant in a Place - Fractal Growth Mechanic // a 3D interactive experience project that I created with a talented group of friends. This was a self-driven team project on which I was both the Programmer and Technical Artist, using primarily Unity to create the content seen in this clip.
--- This project was a game in which the player plays as a plant, seeded on an otherwise barren planet that had been burned by the life that previously lived on it. Playing as a plant is certainly an unconventional mechanic, and it had the potential to make or break the game depending on how much fun it could offer. To give the player a strong sense of control, while also making them truly FEEL like a plant, I created a dynamic, fractal-based growth mechanic. It combines the player's directional decisions, the presence of light and water, and mirrored spirals and branches that grow in natural symmetry with the moves the player consciously makes: the player directly controls only a lone growth tip on a single bud of a single branch, which becomes otherwise insignificant as the entire plant grows (see the sketch below). This gave players agency, control, and a plant-like sense of presence that I believe is otherwise unseen in the gaming world.
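To make the idea concrete, here is a small, hypothetical Unity C# sketch of the general approach (not the shipped code): the player steers a single growth tip, and each growth step spawns mirrored sibling branches whose segment length is scaled by simple light and water availability values.

    using UnityEngine;

    // Hypothetical sketch of one fractal growth step: the controlled tip advances
    // in the player's chosen direction, and mirrored child tips branch off
    // symmetrically, scaled by light/water availability.
    public class GrowthTip : MonoBehaviour
    {
        public float segmentLength = 0.5f;
        public float branchAngle = 35f;
        [Range(0f, 1f)] public float light = 1f;
        [Range(0f, 1f)] public float water = 1f;

        public void Grow(Vector3 playerDirection, int depth)
        {
            // Growth vigor depends on available light and water.
            float vigor = segmentLength * Mathf.Lerp(0.2f, 1f, (light + water) * 0.5f);
            transform.position += playerDirection.normalized * vigor;

            if (depth <= 0) return;

            // Mirrored branches echo the player's choice, giving natural symmetry.
            foreach (float sign in new[] { -1f, 1f })
            {
                Vector3 dir = Quaternion.AngleAxis(sign * branchAngle, Vector3.up) * playerDirection;
                GrowthTip child = Instantiate(this, transform.position, Quaternion.LookRotation(dir));
                child.segmentLength = segmentLength * 0.7f;   // fractal self-similarity
                child.Grow(dir, depth - 1);
            }
        }
    }

The depth limit and the 0.7 shrink factor are illustrative; the point is that a single player input cascades into a symmetric, self-similar branching pattern.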
#6 [2:10]: VESP - Electromagneticeptive Apiary // a VR experience project that I created with the Pittsburgh Zoo/Aquarium as well as the Elephant Lab at the Carnegie Mellon ETC. This was a team project on which I was both the Design Director and Producer, using primarily Unity to create the content seen in this clip.
--- This project was a related VESP prototype, leveraging the electromagnetoreception that is present in both bees and the charged flowers they pollinate. Guests can perceive the discharging flowers visually and haptically, experiencing how bees know to avoid flowers that fellow bees have recently visited without needing to check them manually.
#7 [2:20]: VESP - Infrasonic Pachyderms // a VR experience project that I created with the Pittsburgh Zoo/Aquarium as well as the Elephant Lab at the Carnegie Mellon ETC. This was a team project on which I was both the Design Director and Producer, using primarily Unity to create the content seen in this clip.
--- A VR experience that allows guests to experience the otherwise invisible and inaccessible senses and perceptions afforded only to animals. This experience focuses on sonic and haptic feedback that conveys infrasound, a sense that allows elephants to communicate across huge distances, feel oncoming weather days in advance, and detect seismic activity with ease. We give guests this infrasonic sense in VR, an elephant guide to show them how to use it, a docent to support them through the experience, and the goal of reuniting with the elephant's family, which drives players to engage with the sense, explore its use, and apply it to achieve a goal. In doing so, the experience broadens their understanding of the natural world and provides an immersive pivot point for empathy with animals and the environment, like never before.
#8 [2:25]: As Above, So Below (A Dimension-Shifting Puzzle Roguelike) // a project that I created with self-determined goals and personal timelines. This was a team project on which I was both the Programmer and Artist, using primarily Unity to create the content seen in this clip.
#9 [2:32]: Echoes // a Location-Based Entertainment project that I created at the Yale School of Art. This was a dual project: I served as Experience Designer, working with Shader Programmer Robert Gerdisch to create an immersive navigation tool for a pitch-black black-box theater, superimposing a virtual, augmented ring shader that let guests navigate the room using only ground-surface vibration patterns. Unity was used to create the content seen in this clip.
#10 [2:42]: Bridge Generator (Procedural) // a short film project (realtime rendered in 3D) that I created with a multilingual and international team of Master's students at the Entertainment Technology Center. This was a team project on which I was both the Technical Artist and Cinematographer, using primarily Houdini to create the content seen in this clip.
#11 [2:49]: Waterfall Generator (Procedural) // a short film project (realtime rendered in 3D) that I created with self-determined goals and personal timelines. This was a solo project on which I was both the Technical Artist and Cinematographer, using primarily Houdini to create the content seen in this clip.
--- Procedural controls in Houdini allow width, height, water turbulence, and rock-hugging bumps and planes to be specified instantly or dynamically. The resulting FBX then self-tiles its UV maps to create an appropriate level of detail, while still following the proper flow and geometric distribution so the geometry matches the shader that renders it in-engine.
#12 [2:55]: Watercolor Waterfall and Mist Shader // a short film project (realtime rendered in 3D) that I created with self-determined goals and personal timelines. This was a solo project on which I was both the Technical Artist and Cinematographer, using primarily Unity Shadergraph to create the content seen in this clip.
--- A waterfall generator and an associated shader for both the waterfall and the mist where it meets the ground, based on old East Asian watercolor styles and designed to render the flow of each component as an individual frame of a watercolor painting.
#13 [3:00]: Neuron Generator (Procedural) // a short film project (realtime rendered in 3D) that I created with a multilingual and international team of Master's students at the Entertainment Technology Center. This was a team project on which I was both the Technical Artist and Cinematographer, using primarily Houdini to create the content seen in this clip.
--- Procedural controls in Houdini for neuronal complexity, axon length, dendrite density, nuclear size, and astrocyte geometry
#14 [3:10]: Harpoon // a VR experience project that I created with the Pittsburgh Children's Museum/MuseumLab at the Carnegie Mellon ETC. This was a team project on which I was both the Technical Artist and Programmer, using primarily C# to create the content seen in this clip.
--- "Pull Back Scale Traversal" shows a dynamic sonic-variation mechanic that lets a single sound be pitch-shifted up or down programmatically, by adjusting its sampling so that it lands on the tones of a given scale in a particular mode (major/minor/pentatonic, etc.) and key (F#, etc.). A sketch of the technique follows.
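As a hedged illustration of this technique (the scale tables and key offset below are my own assumptions, not the project's data), a Unity C# sketch can re-pitch one source clip onto scale tones by setting AudioSource.pitch to the equal-tempered ratio for the chosen semitone offset:

    using UnityEngine;

    // Sketch: re-pitch a single recorded note onto the degrees of a chosen scale
    // by scaling playback speed (AudioSource.pitch) with equal-tempered ratios.
    public class ScaleTonePlayer : MonoBehaviour
    {
        public AudioSource source;                                       // holds the one source clip
        static readonly int[] MajorScale = { 0, 2, 4, 5, 7, 9, 11 };     // semitone offsets from the root
        static readonly int[] MinorPentatonic = { 0, 3, 5, 7, 10 };

        public int keyOffsetSemitones = 6;   // e.g. 6 semitones above the clip's root (~F# if the clip is a C)

        public void PlayDegree(int degree, bool pentatonic = false)
        {
            int[] scale = pentatonic ? MinorPentatonic : MajorScale;
            int semitones = keyOffsetSemitones
                            + scale[degree % scale.Length]
                            + 12 * (degree / scale.Length);              // wrap up an octave per pass through the scale
            source.pitch = Mathf.Pow(2f, semitones / 12f);               // equal-tempered pitch ratio
            source.Play();
        }
    }

Changing the pitch this way also changes playback speed, which is the usual trade-off of sample-rate-style pitch shifting from a single clip.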
#15 [3:18]: Acchordian // a VR experience project that I created with the Pittsburgh Children's Museum/MuseumLab at the Carnegie Mellon ETC. This was a project on which I was both the Technical Artist and Programmer, using primarily C# to create the content seen in this clip.
--- This follows the same general format as the prior example, adjusting pitch and relative scale frequency to match tones from a single source audio file.
#16 [3:25]: Leveraging Intel RealSense Depth-Cameras for Room-Scale, Personal Motion Capture // a project that I created with self-determined goals and personal timelines. This was a solo project on which I was the Full-Stack Creator, using primarily C# to create the content seen in this clip.
#17 [3:29]: The Virtualian Man // a project that I created with self-determined goals and personal timelines at the Yale School of Art. This was a solo project on which I was the Full-Stack Creator, using primarily Maya to create the content seen in this clip.
#18 [3:31]: Opia ("World's Apart"), winner for Immersive Media Design at the Ivy League Film Festival and reviewed in Educause (https://er.educause.edu/blogs/2018/10/opia) // a project that I created with self-determined goals and personal timelines at the Yale School of Art. This was a solo project on which I was the Full-Stack Creator, using primarily Unity to create the content seen in this clip.
#19 [3:37]: Mind's Eye (Photogrammetry seen uniquely through each character's literal heads) // a project that I created with self-determined goals and personal timelines at the Yale School of Art. This was a solo project on which I was the Full-Stack Creator, using primarily Unity to create the content seen in this clip.
#20 [3:41]: ML Painting 3-Dimensionalization // a VR experience project that I created with Google at the Carnegie Mellon ETC. This was a team project on which I was both the Programmer and Designer, using primarily C# to create the content seen in this clip.
--- Working with Google's Machine Learning and Research group, we created a set of tools and experiences for people otherwise unfamiliar with ML and AI: accessible experiences that let guests feel connected to the inputs and outputs of an ML system, building comfort with and understanding of the algorithms that guide our lives behind the scenes. This piece used a simple grouping algorithm, K-means, to generate groups of like members. Typically this kind of sorting uses axial distance between group members (points with three axes of data, often called X, Y, and Z). Rather than use spatial point data, which is hard to understand visually and intuitively, we instead operated on the pixel data of paintings, generating proximity-based groupings from three alternative, more visual variables that can also be graphed and analyzed: the R, G, and B channels of each painting's pixels. A sketch of the idea follows.
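For illustration only (the cluster count, iteration budget, and naive initialization below are assumptions, not the project's settings), a compact C# sketch of K-means over pixel colors, treating each pixel's RGB values as a 3D point:

    using System.Linq;
    using UnityEngine;

    // Sketch: K-means over pixel colors. Each pixel's (R,G,B) is a 3D point,
    // so the same distance-based grouping used for XYZ data applies directly.
    public static class ColorKMeans
    {
        public static int[] Cluster(Color[] pixels, int k = 5, int iterations = 10)
        {
            // Start the centroids on evenly spaced pixels (a naive initialization).
            Vector3[] centroids = new Vector3[k];
            for (int i = 0; i < k; i++)
                centroids[i] = ToVec(pixels[i * pixels.Length / k]);

            int[] labels = new int[pixels.Length];
            for (int it = 0; it < iterations; it++)
            {
                // Assign each pixel to its nearest centroid.
                for (int p = 0; p < pixels.Length; p++)
                {
                    Vector3 c = ToVec(pixels[p]);
                    labels[p] = Enumerable.Range(0, k)
                        .OrderBy(i => (centroids[i] - c).sqrMagnitude)
                        .First();
                }

                // Move each centroid to the mean of its assigned pixels.
                for (int i = 0; i < k; i++)
                {
                    var members = Enumerable.Range(0, pixels.Length).Where(p => labels[p] == i).ToList();
                    if (members.Count > 0)
                        centroids[i] = members.Aggregate(Vector3.zero, (sum, p) => sum + ToVec(pixels[p])) / members.Count;
                }
            }
            return labels;
        }

        static Vector3 ToVec(Color c) => new Vector3(c.r, c.g, c.b);
    }

The clusters that come back can then be visualized directly on the painting, which is what makes RGB a more intuitive teaching space than abstract XYZ coordinates.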
#21 [3:46]: MOIC - Candy Pipe Generator (Procedural) // a Location-Based Entertainment project that I created with the Museum of Ice Cream at If Magic. This was a solo project on which I was both the Design Director and Technical Artist, using primarily Houdini to create the content seen in this clip.
#22 [3:52]: MOIC - Sundae & Unmelting Shader (Procedural) // a Location-Based Entertainment project that I created with the Museum of Ice Cream at If Magic. This was a solo project on which I was both the Design Director and Technical Artist, using primarily Unity Shadergraph to create the content seen in this clip.
#23 [3:58]: PlantMail (A Proposed Diegetic Interface to create a pleasant, AR-friendly Skeuomorphism for Email Inboxes of the Future) // a project that I created at the MIT Media Lab. This was a solo project on which I was both the Programmer and Technical Artist, using primarily Unity to create the content seen in this clip.
#24 [4:05]: Neuroanatomy AR (3D neuroscientific visualizations in AR to replace textbook-illustration learning models for first-year Yale medical students) // a project that I created with the Yale School of Medicine at the Yale Center for Collaborative Arts and Media (CCAM). This was a solo project on which I was the Full-Stack Creator, using primarily Vuforia to create the content seen in this clip.
--- https://www.wesson.zone/coverage
#25 [4:13]: Orbitals (Novel VR interactions through Inertia-Based Control) // a project that I created with the Yale Blended Reality Program, reporting my findings to HP Research at the Yale Center for Collaborative Arts and Media (CCAM). This was a solo project on which I was the Full-Stack Creator, using primarily Unity to create the content seen in this clip.
#26 [4:19]: Zero-G (Novel VR Flight Interactions through Wrist-Mounted Controllers) // a project that I created with the Yale Blended Reality Program, reporting my findings to HP Research at the Yale Center for Collaborative Arts and Media (CCAM). This was a solo project on which I was the Full-Stack Creator, using primarily Unity to create the content seen in this clip.
#27 [4:25]: Bulb (Novel VR Selection/Pickup Mechanics through a Pressure-Based, Gas-and-Liquid-Friendly Diegetic Grabber) // a project that I created with the Yale Blended Reality Program, reporting my findings to HP Research at the Yale Center for Collaborative Arts and Media (CCAM). This was a solo project on which I was the Full-Stack Creator, using primarily Unity to create the content seen in this clip.
#28 [4:45]: Lyraflo End-Sequence // a project that I created with fellow Master's students at Carnegie Mellon's ETC (the Entertainment Technology Center). This was a team project on which I was both the Technical Artist and Designer, using primarily After Effects to create the content seen in this clip.
--- An artistic 3D modeling and graphics exercise to create an animated logomark for our team, which was building a VR product to help students engage accessibly with music theory on their own terms. The final piece was a personal exploration of how best to synthesize 2D and 3D graphical assets, and of how to stylistically represent a custom ending that fit our brand (previously expressed exclusively through 2D assets) in the 3D manner we felt was vital to our core brand expression, given the 3D content we hoped to provide to our client. It was rendered in combination across Adobe Dimension, Unity, After Effects, and Photoshop.