Over the last few weeks, I’ve been testing Google Glass and exploring how it can be used for educational purposes. I’ve also been wearing it during everyday activities to see how Glass fits into daily life.
This past week I had the pleasure of attending the Major League Baseball All-Star Game events in New York. I decided this would be a great place to test out some Glass functions and further explore its possibilities. Here are a few highlights of the experience:
When you first wear Glass, it feels a little odd, especially if you don’t normally wear glasses. After wearing it for a day though, you really just forget that it’s there. The display is almost outside your peripheral vision, so the physical device is there if you need it, but doesn’t distract or interfere with life when you don’t. That’s one of the benefits of Glass: it’s there, all the time, when you need it. Sure, my cell phone is with me too, but a wearable piece of technology like Glass redefines accessible technology. There’s no pulling a cell phone out of your pocket; it’s just there. This opens up lots of possibilities.
I said that Glass doesn’t distract from life, and from a technology standpoint that’s true. From a social standpoint though, it can be distracting.
The bottom line is that Glass is new (it’s not even commercially released yet) and it is one of the first true wearable computers. For a lot of people, seeing someone wearing Glass provokes a “Why does that guy have a computer strapped to his head?” reaction. That lends itself to distraction.
Most of the social distractions caused by Glass at the All-Star Game events were brief. Some consisted of nothing more than an odd “What’s that thing?” passing glance, or being pointed at as someone said “That guy’s wearing Glass” to his friends. When people asked me about it, the conversations were short and mostly centered on “What’s it like?” curiosity.
I expect that over time (especially after Glass is released publicly) the novelty of wearable technology will diminish, and as wearable technology becomes mainstream, the social distraction of Glass will drop.
Pictures and Video Recording
I took a lot of pictures (and some video) during the three days at Citi Field. The majority of the photos were taken with my digital SLR. I also took photos using my iPhone and Glass.
As I mentioned earlier, one of the major advantages Glass has is that it’s just there, all the time. If I wanted to take a picture I could just press the shutter button or touch the touchpad and say “OK Glass, take a picture.” I was impressed with Glass’s voice recognition, which in most cases understood my commands despite the tremendous amount of background noise from the crowd.
Where Glass succeeds in terms of convenience, it struggles in terms of image quality. Shown below are a few comparison images between shots taken with Glass and a similar shot taken with my iPhone.
You can see that the iPhone 5 picture handled the lighting much better than Glass, partly because I was able to adjust the exposure with a touch of the screen before taking the shot. With Glass, there are plenty of internal adjustments (and automatic enhancements made via Google+), but you can’t make shooting adjustments to your pictures and video, at least not at this point. It wouldn’t surprise me to see firmware, and possibly hardware, enhancements before Glass is released publicly.
The inability to adjust your shots also extends to framing them. Glass does a great job of capturing the wearer’s field of view, but when most people take a picture, they frame the shot in some way using their smartphone screen or a viewfinder. With Glass, you can’t see how a still photo is framed until after you’ve taken it and the photo is displayed. Relatedly, Glass currently offers no way to zoom in on your subject.
I should point out that in well-lit conditions, the Glass camera performs very well. Here are two shots I took during the day at the park using Glass.
As I mentioned, I suspect that the camera hardware and firmware will be adjusted in the commercial model of Glass. That said, the quality of the photos still works well enough to support any learning experience that might leverage its capabilities.
Hangouts on LTE
While at the game I wanted to test a Google Hangout. Unfortunately, this failed miserably. In truth, I expected it to fail. LTE connections are fast, but they can still be rough for streaming live video. Add to that the environment: a stadium with over 50,000 people, many of whom were pulling data from the same local cell towers.
In the test I ran, one other person and I tried to do a Hangout. I occasionally saw her pixelated image appear, but never heard her voice. It was, in a word, unusable.
Granted, this may have been an extreme test of the Hangout capabilities. That said, it’s still a test, and it still failed. Live video streaming from Glass has tremendous application possibilities for learning and performance. As with most video streaming environments, bandwidth availability will greatly impact the effectiveness of the video streaming.
Battery Life
Battery life was another area of concern. Google’s official statement is that Glass can last about a day with typical use. As a spec description, that doesn’t say much at all.
My usage of Glass on the day of the MLB All-Star Game consisted of a few things:
- Glass was on all day and tethered via Bluetooth to my iPhone
- I took 20-30 still pictures
- I took a few videos averaging 2-3 minutes each
- I posted some of the pictures and videos to social media services
- I performed a few Google searches
Because I’m a power user of my iPhone, I usually carry a mobile charger with me, and that charger can also be used to charge Glass. It’s a good thing, because by the end of the day, Glass had run out of charge twice. Battery life is something else I would hope to see improved before a commercial release.
Possibilities for Learning and Performance
There were a number of instances during the game where I could see learning and performance possibilities for Glass.
- I can very easily see Glassware designed to enhance your experience at a baseball game. For example, as a batter comes to the plate or a new pitcher enters the game, supplemental information about the player could be pushed to the Glass screen. It wouldn’t be something that interrupts my viewing, but could instead provide information that enhances it. The same concept of pushing contextual information to users can easily be adapted for learning and performance purposes, from learning more about an exhibit in a museum to getting step-by-step information related to a task.
- The video recording and/or streaming capabilities of Glass have tremendous learning and performance capabilities. Glass captures the video of an individual’s experience in a way that few cameras do. It creates a view that simulates seeing the world through the eyes of the person wearing the Glass, and that can be extremely powerful. Just look at this promo video from Google for a few examples.
- One of the people who stopped me to ask about Glass did not speak English well; his primary language was Spanish. We had a brief conversation about Glass, but it was challenging, as I don’t speak Spanish and he struggled to find the words needed for his questions. The somewhat ironic aspect of that situation is that the hardware of Google Glass will likely be able to assist me with a situation like that in the future. Imagine Glassware that translates for you in real time. Someone says something to you in another language, Glass hears it, translates it, and provides a written translation on the screen, a spoken translation via the speaker, or a combination of the two. This technology already exists in many forms, and is a nice fit for adaptation as Glassware.
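To make the “pushing cards to the wearer” idea above a little more concrete, here’s a minimal sketch in Python of what the server side of translation Glassware might look like. During the Explorer program, Glassware pushed timeline cards to the device through Google’s Mirror API; the phrasebook lookup below is a toy stand-in for a real speech-recognition and translation service, and the phrases themselves are invented for illustration.

```python
import json

# Toy sketch: package a translated utterance as a Mirror API timeline
# card. Real Glassware would POST this JSON to
# https://www.googleapis.com/mirror/v1/timeline with an OAuth 2.0 token;
# here we just build and print the card so its shape is easy to see.

# Stand-in for a real speech-recognition + translation service.
PHRASEBOOK = {
    "¿qué es eso?": "What is that?",
    "¿cuánto cuesta?": "How much does it cost?",
}

def translate(spanish_text):
    """Look up a translation, falling back to the original text."""
    return PHRASEBOOK.get(spanish_text.lower(), spanish_text)

def build_translation_card(spanish_text):
    """Build the JSON body for a Mirror API timeline item."""
    english = translate(spanish_text)
    return {
        "text": english,                       # shown on the Glass display
        "speakableText": english,              # spoken via the bone-conduction speaker
        "menuItems": [{"action": "READ_ALOUD"}],
        "notification": {"level": "DEFAULT"},  # chime to get the wearer's attention
    }

card = build_translation_card("¿Qué es eso?")
print(json.dumps(card, ensure_ascii=False, indent=2))
```

In real Glassware the recognized speech would come from the device’s microphone rather than a hard-coded string, but the card-building step would look much like this.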
The MLB All-Star Game was a lot of fun, and bringing Glass with me was quite interesting. It not only helped me better understand the potential of Glass, but it also brought to light some of the limitations that need to be factored in.
What are your thoughts about these tests? Are there specific types of tasks you’d like to see me test as part of this series?