
Time: Jul 2019 - Feb 2020
Location: Dassault Systèmes SolidWorks Boston Campus, eDrawings team
Role: Main Developer

More a redesign than an upgrade of HoloLens 1, HoloLens 2 brings many great improvements, including more processing power, a larger field of view, more detailed hand tracking, better comfort, and eye tracking. Since we already had HoloLens 1 development experience and wanted to continue along this path, we partnered with Microsoft to develop our CAD model viewer app on the latest Mixed Reality platform and enhance it by integrating HoloLens 2's unique features.


In this project, I worked very closely with our Product Manager, Project Manager, Product Definition Engineer, User Experience Designer, and Quality Assurance Engineer in an Agile team to create this app.

We mainly use the Unity 3D engine, Unity assets, and MRTK v2 to build the app. Like our HoloLens 1 app, the HoloLens 2 app can load one or more models and reset, drag, rotate, scale, and mate them. We integrated the new HoloLens 2 and MRTK v2 features so the app is more intuitive and easier to use, thanks to the new hand gestures, the better hardware and software performance of HoloLens 2, and the updated Unity assets.
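To give a flavor of how little glue code the new hand interactions require, here is a minimal sketch assuming MRTK v2.x component names (ManipulationHandler, later superseded by ObjectManipulator); it is illustrative, not our production setup:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Make a freshly loaded model respond to HoloLens 2 articulated-hand gestures.
public class MakeModelGrabbable : MonoBehaviour
{
    void Start()
    {
        // Near (direct grab) interaction needs a collider.
        if (GetComponent<Collider>() == null)
            gameObject.AddComponent<BoxCollider>();

        gameObject.AddComponent<NearInteractionGrabbable>(); // direct hand grab
        gameObject.AddComponent<ManipulationHandler>();      // one-/two-hand move, rotate, scale
    }
}
```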

Due to Microsoft policy, we could not expose any photos or videos of the actual HoloLens 2 device, so there is only a written description for now. But if you are interested in HoloLens 2, there are some official videos and articles that I recommend you take a look at:

Time: Dec 2018 - Jul 2019
Location: Dassault Systèmes SolidWorks Boston Campus, eDrawings team
Role: Developer

One of the most significant limitations of mobile devices, including HoloLens, is that their processing performance is far behind that of desktop machines. That is where the idea of a HoloLens streaming solution comes from: what if we could use a desktop machine's rendering power while still enjoying the interaction of an AR device? So we developed the streaming solution.

A brief introduction to the main components of our app:

Load Model

The user can open a SolidWorks file just as in eDrawings VR. The user then runs the Holographic Remoting Player app on the HoloLens, making sure the desktop and the HoloLens are on the same network. After clicking the AR button in the eDrawings UI and entering the HoloLens' IP address, and once the connection succeeds, the user can see the model streamed from the desktop.
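For context, the connection step from Unity looked roughly like the sketch below, based on the Holographic Remoting API that Unity shipped at the time (UnityEngine.XR.WSA); the actual eDrawings integration is more involved:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR;
using UnityEngine.XR.WSA; // Holographic Remoting API in Unity 2017/2018

public class RemotingConnector : MonoBehaviour
{
    // Hypothetical entry point, called with the IP address the user typed in.
    public void ConnectTo(string hololensIp)
    {
        HolographicRemoting.Connect(hololensIp);
        StartCoroutine(EnableWindowsMRWhenConnected());
    }

    private IEnumerator EnableWindowsMRWhenConnected()
    {
        // Wait until the Holographic Remoting Player on the device accepts the connection.
        while (HolographicRemoting.ConnectionState != HolographicStreamerConnectionState.Connected)
            yield return null;

        // Switch Unity's renderer to the Windows Mixed Reality device,
        // so frames start streaming to the HoloLens.
        XRSettings.LoadDeviceByName("WindowsMR");
        yield return null; // the device load takes effect on the next frame
        XRSettings.enabled = true;
    }
}
```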

Model Toolbar

The Model Toolbar includes five buttons: Explode, Scale, Rotate, Drag, and Reset. The toolbar is in tag-along mode, which means it always stays in the user's field of view.
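A minimal sketch of the tag-along idea in a plain Unity script (the toolkits ship more sophisticated solvers with damping and view clamping):

```csharp
using UnityEngine;

// Keeps the toolbar in front of the user by easing it toward a point
// along the camera's forward direction, and keeps it facing the user.
public class TagAlong : MonoBehaviour
{
    public float distance = 1.5f; // meters in front of the user
    public float smoothing = 3f;  // higher = snappier follow

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        Vector3 target = cam.position + cam.forward * distance;
        transform.position = Vector3.Lerp(transform.position, target, smoothing * Time.deltaTime);
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
    }
}
```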

Explode

Similar to the eDrawings explode feature, the user can explode the model on the HoloLens: every component is moved a certain distance in a certain direction away from the center of the bounding box.
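A sketch of this explode logic, with the per-component offset distance as an illustrative value:

```csharp
using UnityEngine;

// Explode: push every component a fixed distance along the ray from the
// assembly's bounding-box center through the component's own center.
public class Exploder : MonoBehaviour
{
    public float explodeDistance = 0.3f; // illustrative offset, in meters

    public void Explode()
    {
        Renderer[] parts = GetComponentsInChildren<Renderer>();
        if (parts.Length == 0) return;

        // Combined bounding box of the whole assembly.
        Bounds assembly = parts[0].bounds;
        foreach (Renderer r in parts) assembly.Encapsulate(r.bounds);

        foreach (Renderer part in parts)
        {
            Vector3 dir = (part.bounds.center - assembly.center).normalized;
            part.transform.position += dir * explodeDistance;
        }
    }
}
```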

Scale

Loading a CAD model at real size is very important, so the app has a Scale feature. When the Scale button is toggled on, the active model is scaled to one-to-one size; when it is toggled off, the model is scaled back to table size.

Rotate

In this mode, the user can rotate the model with the air-tap-and-hold gesture. The app only allows horizontal rotation.

Drag

In this mode, the user can move the model with the air-tap-and-hold gesture.
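A combined sketch of the Rotate and Drag modes above, assuming the gesture recognizer reports a per-frame hand-position delta during the hold (the OnManipulationDelta callback below is a hypothetical stand-in for that event):

```csharp
using UnityEngine;

// Rotate and Drag share the same input: the hand's positional delta while
// the air-tap-and-hold gesture is active.
public class ManipulateModel : MonoBehaviour
{
    public enum Mode { Rotate, Drag }
    public Mode mode = Mode.Rotate;
    public float degreesPerMeter = 360f; // rotation sensitivity

    // Hypothetical callback, invoked each frame of the hold gesture.
    public void OnManipulationDelta(Vector3 handDelta)
    {
        if (mode == Mode.Rotate)
            // Horizontal-only: map sideways hand motion to a spin around world Y.
            transform.Rotate(Vector3.up, -handDelta.x * degreesPerMeter, Space.World);
        else
            transform.position += handDelta; // free drag
    }
}
```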

Reset

The Reset feature resets the active model to its initial state.

Time: May 2018 - Dec 2018
Location: Dassault Systèmes SolidWorks Boston Campus, eDrawings team
Role: Main Developer

In 2016, Microsoft released the first-generation HoloLens. Unlike traditional desktop devices, it lets the user view content in real 3D space, and unlike Virtual Reality devices, it can scan the physical environment and generate spatial data. The primary interaction methods are head movement, hand gestures, and potentially controllers. Viewing 3D CAD models in real space is definitely a powerful tool for the CAD industry, so we started developing a HoloLens 1 CAD model viewer app.

In this project, I worked very closely with our Product Manager, Project Manager, Product Definition Engineer, User Experience Designer, and Quality Assurance Engineer in an Agile team to create this app.

A brief introduction to the main components of our app:

LoadFile UI

When the app opens, the first thing that shows up is the LoadFileMenu (in Follow Me mode). If the app has been used before, it displays model buttons; if no sync has been performed yet, the menu is empty. The user clicks a model button to load that model. The LoadFileMenu displays at most six buttons per page, and if there is more than one page, Next page and Prev page buttons appear in the UI. The Local/3DDrive button switches between syncing model files from the HoloLens' local storage and from 3DDrive cloud storage, the Prev/Next buttons go to the previous or next page, the Settings button launches the Settings menu, and the Sync button opens the Sync menu.
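The paging logic itself is simple; a sketch of how the six-buttons-per-page rule and the Prev/Next visibility could be computed (illustrative, not the shipped code):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class LoadFileMenuPaging : MonoBehaviour
{
    const int ButtonsPerPage = 6;
    public List<string> modelNames = new List<string>(); // filled by the sync step
    int page;

    int PageCount => Mathf.Max(1, Mathf.CeilToInt(modelNames.Count / (float)ButtonsPerPage));

    // Model names to show on the current page.
    public IEnumerable<string> CurrentPage()
    {
        int start = page * ButtonsPerPage;
        int end = Mathf.Min(start + ButtonsPerPage, modelNames.Count);
        for (int i = start; i < end; i++)
            yield return modelNames[i];
    }

    // Prev/Next only appear when there is somewhere to go.
    public bool ShowPrev => page > 0;
    public bool ShowNext => page < PageCount - 1;
    public void NextPage() { if (ShowNext) page++; }
    public void PrevPage() { if (ShowPrev) page--; }
}
```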

Settings UI

The Settings menu includes the HoloLens local storage target path and a switch to turn Mate Restriction Placement on or off. By default, it is on.

Sync File UI

The user can use the Sync File UI to start syncing model files from the HoloLens' local storage. After the Sync button is pressed, the app syncs files from the HoloLens' local storage to the app's internal storage, and a circular progress bar shows the syncing status. Once finished, the Sync File UI displays 'Sync Finished' so the user can go back to the LoadFile UI to load models.
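Conceptually, the sync step is a file copy with progress reporting. A minimal sketch, assuming a synchronous copy and an illustrative source folder (the real app syncs in the background so the UI stays responsive):

```csharp
using System.IO;
using UnityEngine;

// Sync: copy model files from the device's local folder into the app's
// private storage, exposing a 0..1 fraction for the circular progress bar.
public class FileSync : MonoBehaviour
{
    public float Progress { get; private set; }

    public void Sync(string sourceFolder)
    {
        string[] files = Directory.GetFiles(sourceFolder);
        for (int i = 0; i < files.Length; i++)
        {
            string dest = Path.Combine(Application.persistentDataPath,
                                       Path.GetFileName(files[i]));
            File.Copy(files[i], dest, overwrite: true);
            Progress = (i + 1) / (float)files.Length; // drives the progress circle
        }
    }
}
```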

Load Model

After the user air-taps a model button on the LoadFile UI, the app reads the GLTF files and imports the model. During loading, a loading box in Follow Me mode notifies the user that the model is still loading, and the user can cancel at any time by pressing the Cancel button. Once loaded, the model is at table size (so it is easy to control) and in Follow Me mode. The user can then initialize the model by simply air-tapping it, so the model exits Follow Me mode and the Model Toolbar is attached to its MRTK bounding box.
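The table-size normalization can be derived from the model's combined render bounds; a sketch, with the 0.5 m target size as an assumed value:

```csharp
using UnityEngine;

// Fit a freshly imported model to "table size": scale it uniformly so its
// largest bounding-box dimension matches the target size.
public static class ModelSizing
{
    public static void FitToTableSize(GameObject model, float targetSize = 0.5f)
    {
        Renderer[] parts = model.GetComponentsInChildren<Renderer>();
        if (parts.Length == 0) return;

        Bounds b = parts[0].bounds;
        foreach (Renderer r in parts) b.Encapsulate(r.bounds);

        float largest = Mathf.Max(b.size.x, Mathf.Max(b.size.y, b.size.z));
        if (largest > 0f)
            model.transform.localScale *= targetSize / largest;
    }
}
```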

MRTK Bounding Box

This app uses the MRTK Bounding Box for user-friendly manipulation. The MRTK Bounding Box lets the user move, scale, and rotate the model easily with hand gestures, without any restriction, so the model can be viewed at any size, from any position, and from any angle.

Model Toolbar

The Model Toolbar includes four buttons: Reset, Scale, Mate, and Clear. As the user moves, the Model Toolbar actively changes its position so it always faces the user. If the attached model is mated, the toolbar uses the stored mate data to determine its position, so it is not blocked by the Spatial Mapping mesh.

Reset

The Reset feature resets the active model to its initial state: the active model goes back to table size and back into Follow Me mode. This action also clears any stored mate data.

Scale

Viewing the CAD model at real size is very important, so the app has a Scale feature. When the Scale button is toggled on, the active model is scaled to one-to-one size over two seconds; when it is toggled off, the model is scaled back to table size. The process is animated.
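A minimal sketch of that two-second animation as a coroutine toggling between the stored table-size scale and the one-to-one scale:

```csharp
using System.Collections;
using UnityEngine;

// Animate between table-size scale and one-to-one scale over a fixed duration.
public class ScaleToggle : MonoBehaviour
{
    public Vector3 tableScale;              // captured when the model is first fitted
    public Vector3 realScale = Vector3.one; // 1:1 with the physical part
    public float duration = 2f;             // seconds, as described above

    public void SetRealSize(bool on)
    {
        StopAllCoroutines();
        StartCoroutine(AnimateTo(on ? realScale : tableScale));
    }

    IEnumerator AnimateTo(Vector3 target)
    {
        Vector3 from = transform.localScale;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            transform.localScale = Vector3.Lerp(from, target, t / duration);
            yield return null;
        }
        transform.localScale = target; // land exactly on the target
    }
}
```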

Mate

This app supports two mate modes.

The first is Magnetic Mate, for fast alignment. In Magnetic Mate mode, the user selects one bounding-box face of the active model, and the active model then follows the user's head-gaze position with the selected bounding-box face attached to the Spatial Mapping mesh.

The second is Shadow Mate, for precise alignment with or without an offset. Using our unique Shadow Mate tool, the user can mate the active model to multiple surfaces with a target offset easily and efficiently. When the user looks at the distance rulers on the ruler part of the Shadow Mate tool, a green slider appears with text showing the precise distance from the Spatial Mapping mesh. The user can air-tap either the slider or the Mate button to place the active model.

Mated bounding-box faces are stored and used to keep the model aligned in other modes.
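To illustrate Magnetic Mate, the core operation is a head-gaze raycast against the Spatial Mapping mesh followed by snapping the selected face onto the hit surface. A sketch under simple assumptions (the spatial-mesh layer index and the face-offset bookkeeping are illustrative):

```csharp
using UnityEngine;

// Magnetic Mate: follow the head gaze and snap the selected face of the
// model onto the Spatial Mapping mesh wherever the gaze ray hits it.
public class MagneticMate : MonoBehaviour
{
    public int spatialMappingLayer = 31; // assumed layer for the spatial mesh
    public float faceOffset = 0.1f;      // distance from model center to the selected face

    void Update()
    {
        Transform head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, 10f,
                            1 << spatialMappingLayer))
        {
            // Rest the selected face on the surface, oriented along its normal.
            transform.position = hit.point + hit.normal * faceOffset;
            transform.rotation = Quaternion.LookRotation(
                Vector3.ProjectOnPlane(head.forward, hit.normal), hit.normal);
        }
    }
}
```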

Clear

The Clear feature deletes the active model from the scene.

Multiple Models

This app also supports multiple models: the user can load several models and apply any feature to any of them.

Time: 3 Mar 2017 - 6 Mar 2017
Location: Boston Raizlabs
Prize: Second Prize, Virtual 2 Reality Challenge powered by Discover; invited to the Innovation Project Conference
Role: HoloLens Unity developer

Personalized marketing powered by AR in locations such as subway stations and airports: DFS would rent these marketing spaces to merchants, and based on previous purchase history, people walking by would be served a personalized advertisement with an option for quick purchase.

The current traditional advertising system has a lot of problems. Although most billboards may be viewed thousands of times a day by many people, most of us ignore them or are not interested. For customers, most ads do not match our needs. That is the key problem.

The second problem is that most billboards can only show one or a few ads most of the time, which makes the cost very high because their efficiency and flexibility are so poor. Many merchants have to spend huge amounts of money on ads, yet the conversion rate is very low.

The third problem is timeliness. When customers finally see an advertisement they are interested in, they cannot purchase immediately; instead, they have to wait until they get home to place the order on a computer or mobile phone, which, again, is very inefficient.

Now, we provide one simple solution, the game changer: the Your Ads system. AR is a perfect fit for this problem. Based on your personal preferences and history, the Your Ads system shows you personalized ads on billboards.

For customers, we get what we want: we no longer need to tolerate boring ads and instead always see ads we are genuinely interested in. Customers also no longer need to wait: using a HoloLens or a mobile phone, they can finish ordering instantly, because all transactions and payments are handled by the Your Ads system.

For merchants, costs drop greatly because, with the Your Ads system, one billboard can serve thousands of ads, and the conversion rate is much higher.

We did a simple estimate of the overall revenue, which has three major parts: charges to merchants, interchange fees, and newly opened accounts. Taking Boston Logan Airport as an example, the revenue would be around $235,000.

By using AR technology, we can make ads interactive. Instead of 2D photos or videos, the customer can see a live 3D model and drag it or view the product from any angle. The whole system can be controlled by voice and hand gestures.

By using machine learning, the Your Ads system learns your preferences and presents what you like.

The digital payment process uses mature, modern technology to keep transactions secure and fast.

Time: 19 Nov 2016 - 20 Nov 2016
Location: MIT Media Lab
Prize: MIT Media Lab Hacking Arts Hackathon Hackers' Choice Award
Role: HTC Vive developer

Inkfinity provides a new way to explore the art world without limits. Using VR technology, users can enjoy art in a different world, and multi-dimensional interaction gives them an entirely new feeling for the art.

The current museum/gallery experience of viewing fine art, especially classical East Asian art, is often confusing and alienating. Viewers may wonder about the story, intention, and cultural context underlying the artworks, but current museum services do not effectively address those questions.

We leveraged VR technology to create an immersive, interactive, and personalized poetic journey in which viewers enter the world depicted in the artworks, explore the aesthetics in full detail, and experience the cultural ethos first-hand.

We built the entire product from scratch, partly due to the complete lack of available assets for ink-art motifs, textures, and text fonts in the VR community.

Moreover, East Asian ink paintings are renowned for multiple focal points and for intentionally “leaving space blank” (leaving things to the viewer's imagination), so we had to improvise how the 2D would transform into 3D without obscuring the artist's intention, while still exercising our creative muscles to present our interpretation of this cultural heritage.

Inkfinity can be applied in many areas. For art institutions, visitors could enjoy art in the VR world and view artworks in 3D rather than 2D. It could also be used in education: for example, anyone could recreate an existing artwork in the VR world and add their own ideas to world-famous pieces. For private collectors, Inkfinity offers a different angle from which to view their art.

Voila-R Group

Time: 8 Oct 2016 - 10 Oct 2016
Location: MIT Media Lab
Prize: MIT Media Lab Virtual Reality Hackathon Up and Coming Prize, top 3 project
Role: Google VR developer

Sometimes it is very difficult for us to get together to watch a movie or take part in an activity, because we are busy or live in regions far apart. But when we have some free time, we want to be with our friends and family.

We try to use Virtual Reality to solve this problem. Virtual Reality is a great technology, and it is especially useful for bridging the gap between us.

We used the Unity 3D game engine to build the project and a Google Daydream headset to test and run it. Inside the project, we use Photon Realtime for the multiplayer implementation.
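For flavor, the Photon connection flow is only a few callbacks. The sketch below is written against the current PUN 2 API for readability; our 2016 build used the Photon Unity SDK of that time, whose class names differ:

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Connect to the Photon cloud, join any open cinema room, or create one.
public class CinemaLobby : MonoBehaviourPunCallbacks
{
    void Start() => PhotonNetwork.ConnectUsingSettings();

    public override void OnConnectedToMaster() => PhotonNetwork.JoinRandomRoom();

    public override void OnJoinRandomFailed(short returnCode, string message) =>
        PhotonNetwork.CreateRoom(null, new RoomOptions { MaxPlayers = 8 });

    public override void OnJoinedRoom() =>
        Debug.Log("Watching with " + PhotonNetwork.CurrentRoom.PlayerCount + " viewer(s).");
}
```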

The biggest problem we met was that none of us had any prior Unity 3D design or C# programming experience. We basically started from zero and had to learn everything in order to achieve our goal. But in less than 48 hours, we made it work. Honestly, it went beyond our initial expectations.

The project is a VR-based cinema: multiple users can watch the same movie together and talk to each other. It uses a gaze-based control system, so no extra controller is needed.

The most important thing for us was the valuable experience. After the VR hackathon, everyone on the team knows how to build a VR project.

In the future, we plan to extend the project beyond the cinema to any situation that can use a live stream, such as a drama or a concert. We hope to connect people by sharing fun experiences.

Time: May 2016 - Sep 2016
Advisors: Judith Amores (PhD candidate) and Prof. Pattie Maes
Location: MIT Media Lab, Fluid Interfaces Group
Type: Individual Research

Mobile PsychicVR extends the idea of PsychicVR, created by Judith Amores. It is an Android VR app integrated with the MUSE brain-sensing headband SDK. Cezar Cao finished the initial implementation, and I am currently working on updating the app and putting more ideas into it.

"We non-invasively monitor and record the electrical activity of the brain and incorporate this data in the VR experience using the MUSE headband. By sensing brain waves using a series of EEG sensors, the level of activity is fed back to the user via 3D content in the virtual environment. When the user is focused they are able to make changes in the 3D environment and control their powers. Our system increases mindfulness and helps achieve higher levels of concentration while entertaining the user." - quoted from Judith Amores.

Mobile PsychicVR shares the same vision but on a different platform. Currently, it supports a single-player mode and a multiplayer mode.

In single-player mode, you can control objects in the Virtual Reality environment using your brain activity.

In multiplayer mode, you can interact with another user and communicate by voice. We use Photon Realtime and Photon Voice to implement the connection and the voice chat function.

First, open the app and turn on Bluetooth on the phone, then power on the MUSE headband. Next, click the "Scan" button on the canvas, and the available MUSE headbands appear in a list; then click the "Connect" button.

In both single-player and multiplayer MPVR, a debug interface is provided to indicate the status of the connection. After the app and the headband connect successfully, the waveform in the debug interface displays real-time brain-wave data, including the alpha, beta, gamma, theta, and delta bands and the raw EEG wave. The user can gaze at "Debug Mode" to enable the debug interface or at "Disable Debug Mode" to dismiss it. Note that every button on the canvas can be activated by gaze except the "Disconnect" button, which can only be clicked; this prevents accidentally hitting it and causing a disconnection.

In single-player MPVR, the main functionality is to train brain activity using the VR world. A specific algorithm detects the user's focus level: if the user's focus rises above certain thresholds, he/she gains "magic power" in the VR world to control objects. If the user is focusing while gazing at an interactive object, the object comes under the user's control and the visualized brain power lifts it.
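A toy sketch of this mechanic: GetFocusLevel() is a hypothetical stand-in for the focus metric computed from the MUSE band data (it is not part of the MUSE SDK), and the threshold is illustrative:

```csharp
using UnityEngine;

// Lift the gazed-at object while the user's focus score stays above a threshold.
public class BrainLift : MonoBehaviour
{
    public float focusThreshold = 0.7f; // assumed threshold on a 0..1 focus score
    public float liftSpeed = 0.2f;      // meters per second

    void Update()
    {
        if (GetFocusLevel() > focusThreshold && IsGazedAt())
            transform.position += Vector3.up * liftSpeed * Time.deltaTime;
    }

    float GetFocusLevel()
    {
        // Hypothetical: in the real app this is derived from the alpha/beta
        // band activity streamed by the MUSE headband.
        return 0f;
    }

    bool IsGazedAt()
    {
        // Gaze test: does the camera's forward ray hit this object?
        RaycastHit hit;
        Transform head = Camera.main.transform;
        return Physics.Raycast(head.position, head.forward, out hit, 10f)
               && hit.transform == transform;
    }
}
```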

In version 3.0 of the app, we no longer need an extra Android app; all functions are integrated into one Android app, so the user does not have to jump between different apps.

In multiplayer MPVR, the main functionality is to visualize the users' mental states and make communication between users more interactive. Each user is represented by a spirit model in the VR world, and the spirit model changes based on the user's brain activity: the spirit's body becomes more distinct as the user becomes more focused, and the mental atmosphere changes color and size with brain activity. Users can also talk freely in the VR world, and they can quit VR mode at any time and re-enter. This version supports a maximum of two users at a time.