I don’t exist in VR

When we met in the Lava Lab, I had a chance to try out the Vive. Anna and Brian talked about how great the Vive was compared to the Oculus. So I put the headset on. Anna said, “Wait, you need to hold the controllers.” So I took the headset off, grabbed one controller in each hand, and I nudged the headset back over my head. Hold on, where are my hands?!

The image above shows what I initially saw. I have no hands!

I can’t see my hands in VR. This freaked me out. The controllers are there: I can move them, and they even show where my fingers are pressing on them.

But I can’t see my hands!? I can’t see any of myself in VR.

The controllers are there in VR, but there is not a single trace of my physical body. I get that it’s a virtual experience, but this was shocking to me.

Contrasting VR with AR technology like the Magic Leap, I can’t help but feel that this is another way the “humanity” of a person, their humanness, is not rendered in VR.

Only our actions are represented in VR. We are not.

I don’t exist in VR.

These serious considerations will come into play when we are deciding how to use this technology and determining what to leverage going forward. I’ll be thinking about this experience for a long time. We are editing humans out of the realm of computers.

We have agency but we are only representations.


HoloLens is… meh

The other day I met with the MemoryDump crew at the Lava Lab. Anna and Brian are taking a high-level elective that focuses on VR, and they’ve had access to the new Oculus, Vive, and HoloLens all semester. I was eager to try out this new tech when we met in the lab.

I’ve been watching HoloLens developer videos on YouTube for a few months now. The way the HoloLens is marketed is kind of cringe-worthy. The experience of having it on is nothing like it’s advertised to be. Putting it on made me think, “This feels like a larger Google Glass – underwhelming.”

While the build and feel of the hardware is solid and top notch, the technology is lacking in the following ways:

  • The field of view, while considerably larger than the Google Glass’s, is limited and kind of lame.
  • The touch features require you to be looking at the object you wish to interact with.
  • When you pull windows to place them in 3D space, they aren’t as precise as I want them to be.

In order for the HoloLens to succeed, it needs to offer a field of view of at least 180 degrees.

Coding C Arrays to hold multiple datatypes simultaneously – Part 1

Yesterday I pivoted to C with J. I had him start to build out an array for abstract datatypes in C. Initially we were focusing on writing pseudocode with sorting algorithms. Last week J concretely stated that he wants to work for the NSA doing cyber-security. The decision to pivot to writing all whiteboard code in C was instant.

If J is going to work for the NSA, he needs to get really really really good at C really really really fast. C is the grandfather of so many programming languages.

I had to decide what problem to have J code. The hardest things for me in learning C have been pointers, addresses, and the compiler-specific differences in memory size allocation.
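To make that last point concrete, here is a tiny sketch (my own, not part of J’s exercises) that prints whatever sizes your particular compiler and platform picked for the basic types. The numbers genuinely differ: for example, long is commonly 4 bytes on 64-bit Windows and 8 bytes on 64-bit Linux and macOS.

```c
#include <stdio.h>

int main(void) {
    /* These sizes are implementation-defined: the C standard only
       guarantees minimums, so different compilers and platforms
       will print different numbers here. */
    printf("char:  %zu byte(s)\n", sizeof(char));
    printf("int:   %zu byte(s)\n", sizeof(int));
    printf("long:  %zu byte(s)\n", sizeof(long));
    printf("void*: %zu byte(s)\n", sizeof(void *));
    return 0;
}
```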

With this in mind, I struggled to come up with a project to illustrate the power and limitations of C.

Enter the almighty array.

I remember taking a poll on Twitter a few years back with my programmer friends. This is going to sound lame to any non-programmers, but the poll question was:

“What’s your favorite data structure?”

Instantly, I responded, “The almighty array!”

All of computer science can be summed up in 1 word: lists. We’re interested in manipulating these lists in C with a data structure called an array. Arrays give us the ability to sort our lists, look at what is contained in each element, remove something from the beginning or the end of the list, and add something to the beginning or the end (prepend and append). Unfortunately, C is so old that it wasn’t designed to hold items of different types in the same list.
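Before we get to the multiple-datatype twist, here is a plain C array doing a few of those list operations. This is just my own minimal refresher, not J’s code; the key thing to notice is that every element has to be the same type, which is exactly the limitation this series is about.

```c
#include <stdio.h>

int main(void) {
    /* A plain C array: every element must be the same type (int here). */
    int list[] = { 7, 3, 9, 1 };

    /* "length": total bytes divided by the size of one element.
       This trick only works in the scope where the array is defined. */
    size_t length = sizeof(list) / sizeof(list[0]);

    /* Read and write elements by index. */
    list[0] = 42;  /* replace the first element */
    printf("first: %d, last: %d, length: %zu\n",
           list[0], list[length - 1], length);

    /* There is no built-in way to declare one array whose elements are,
       say, an int, a string, and a char all at once. That's the problem
       J is going to code around. */
    return 0;
}
```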

“Passing an array of known size by value would require pushing all the array content on the stack. This feature would encourage memory consumption, when memory was sparse, and moving the memory was slow. Passing an array of variable size would have required pushing the size and the content, and calculate dynamically where to find the remaining arguments on the stack. An unacceptable overhead for an OS developer!” – StackExchange

C is an old programming language, created by Dennis Ritchie at AT&T’s Bell Labs in the early 1970s. This one dude created a computer language, and then the Unix operating system was rewritten in it. Unix is what OS X on Macs is based on.

One of the drawbacks of C is that it does not have “first-class” arrays.
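Here is a quick illustration of what that means in practice: the moment an array is handed to a function, it decays to a pointer to its first element, so the function has no idea how long the array is and sizeof stops telling the truth (some compilers will even warn you about this).

```c
#include <stdio.h>

/* The parameter is really an `int *` no matter how it's written;
   the array "decays" to a pointer at the call site. */
void inspect(int arr[]) {
    /* sizeof(arr) here is the size of a pointer, NOT the whole array,
       which is why C functions must be told the length separately. */
    printf("inside the function: sizeof(arr) = %zu\n", sizeof(arr));
}

int main(void) {
    int numbers[10] = { 0 };

    /* In the scope where the array is defined, sizeof sees all 10 ints. */
    printf("at the definition:   sizeof(numbers) = %zu\n", sizeof(numbers));

    inspect(numbers);  /* passes &numbers[0], not a copy of the array */
    return 0;
}
```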

This limitation is exactly what I want J to code around. I’ve asked him to come up with ways to do array operations in C. The operations we’re interested in coding up are: pop, push, append, prepend, and length.

He’s started with length in pseudocode. I’m excited to see how this turns out. I’m interested in seeing how he stores various datatypes in the array. We’re going to be working with storing strings, ints, chars, and pointers. I’m excited to see how J tackles this problem set 🙂
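For the curious, here is one possible direction, a minimal sketch of my own rather than J’s solution: use a tagged union as the element type so a single C array can carry ints, chars, strings, and pointers side by side, and track the count yourself to get the length operation J is starting with. All of the names here (Element, MixedArray, TYPE_INT, and so on) are made up for illustration.

```c
#include <stdio.h>

/* A "tag" that records which datatype a given element is holding. */
typedef enum { TYPE_INT, TYPE_CHAR, TYPE_STRING, TYPE_POINTER } Type;

/* One element of a mixed-datatype array: a tag plus a union of values. */
typedef struct {
    Type type;
    union {
        int         i;
        char        c;
        const char *s;
        void       *p;
    } value;
} Element;

/* C arrays don't know their own size, so we track the count ourselves. */
typedef struct {
    Element items[32];  /* fixed capacity keeps the sketch simple */
    size_t  count;
} MixedArray;

size_t length(const MixedArray *a) { return a->count; }

/* "append": add one element to the end (returns 0 if the array is full). */
int append(MixedArray *a, Element e) {
    if (a->count >= 32) return 0;
    a->items[a->count++] = e;
    return 1;
}

int main(void) {
    MixedArray a = { .count = 0 };

    append(&a, (Element){ .type = TYPE_INT,    .value.i = 42 });
    append(&a, (Element){ .type = TYPE_STRING, .value.s = "hello" });
    append(&a, (Element){ .type = TYPE_CHAR,   .value.c = 'J' });

    printf("length: %zu\n", length(&a));          /* prints: length: 3 */
    printf("first:  %d\n",  a.items[0].value.i);  /* prints: first:  42 */
    return 0;
}
```

Pop, push, and prepend would follow the same pattern: adjust `count`, and shift elements when working at the front of the array.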

Building a cockroach horror game in virtual reality for the Oculus Rift


I had a wonderful opportunity to create a game for the Oculus Rift with the MemoryDump virtual reality group at the University of Hawaii at Manoa. We came up with the idea after my wife moved to Hawaii. She was afraid of the roaches in our apartment. Jokingly, we talked about how a person might go about curing themselves of a fear of roaches.

Then we got the idea to create a cognitive behavioral therapy (CBT) game. The idea is that you wake up in a dark room. You hear nasty noises in the dimly lit room. As you come to your senses, you see tiny roaches crawling toward you. 

The goal of the experience would be to endure the roaches for as long as you can. You would endure the terror to acclimate to the new environment. Sounds a lot like any virtual reality (VR) experience doesn’t it? 🙂

As we continued to work on the project, we pivoted to make the game more approachable to the average player. We moved the setting of the game from a dimly lit room to the computer science lounge.

We also created a slipper to swat and kill the roaches. Finally, we created a game loop and demoed the finished prototype at the Information and Computer Science (ICS) end-of-year celebration.

Lessons learned:

  • Unity game programming
  • 3D modeling in Blender
  • Character rigging in Blender
  • Animating in Blender
  • Interacting with 3D space in Unity
  • Coding in C#

Here we are playing the game on the big screen in the Lava Lab.


Interested in learning more?

Check out our project on GitHub: DebuggerPlusPlus

Swift Pair Programming – Session 5

Yesterday, Nick and I pair programmed Swift in Screenhero. We’ve been doing sorting algorithms over the past few weeks. This time we discussed changing up the game plan.

During our previous pairing sessions we created a playground in a Swift iOS project. Apparently, creating a playground in a project prevents you from getting real-time compilation and console output.

Another reason for working with Swift in a project, rather than just a playground, is that we’ll be able to utilize some of the libraries and frameworks that are available in a full project. To be honest, I’m not sure how to import a library or framework into a Swift playground.

Therefore, after raising these concerns with Nick, we decided to pair program within the confines of a project. In the past, Nick mentioned wanting to do some image manipulation. I’m most interested in setting up a single point of authentication with OAuth.

Combining these two desires, we decided to code up an Instagram clone for our pairing sessions.

Last night we found p2/OAuth2, an open-source OAuth2 library on GitHub that is written in Swift (our pairing language). I was about to clone the repo and import the contents into our project when Nick showed me some awesome new technology called CocoaPods.

Before I talk about CocoaPods, let me explain package managers. Package managers are like magically updating software managers: they pull the latest versions from an internet-connected repository and bring all the files, dependencies, and configurations onto your computer automagically. All the computer scientists just cringed a little at my explanation. That’s fine. It works for me.

Usually these package managers, like Homebrew, npm, and apt, are used to update the packages on your computer. With CocoaPods, the community has built a package manager for iOS applications!

We added a pod for p2/OAuth2 and spent the rest of the pairing session configuring it and attempting to connect to Twitter’s API to get a list of Nick’s Twitter followers.

We didn’t manage to get the authentication working last night. Next week we’ll get the application to authenticate. Then I’d like to work on integrating Facebook. Then IG, if that’s possible.

This week, to prepare for our next programming session, I’ll be testing out the p2/OAuth2 code from my personal machine at home. And I’ll also be doing Ray’s tutorial on OAuth2 in Swift. Nothing like being prepared, setting an agenda, and checking off items on a list 🙂