Wednesday, 29 April 2015

Case Study - Good and Bad Interface

For these case studies, we decided to look at physical interactions rather than digital ones. We were asked to come up with one good example, as well as one bad example, of 'haptic' interactions.

Bad Example (Motion Detection):

My bad example is the changing rooms at my old gym back in Taupo. First you would open a door into a small corridor, and then open another door into the main changing room. However, the lights didn't have switches; instead they turned on when a sensor detected movement. The reason I have put this in the bad category is the delay in motion recognition.

Once you set foot in the pitch-black corridor, you actually had enough time to find your way to the next door and get inside the changing room before the lights started to flicker on. It might not seem overly bad, but even that slight delay is very noticeable, and it makes the whole thing feel like a bad haptic interaction.

The changing rooms are still better off having motion-sensor lights, as it stops people forgetting to turn the lights off, so less power is wasted. However, the system would be a lot better with a smaller delay, or with the sensor at the first door rather than in the corridor.
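
As a purely illustrative sketch (I don't know anything about the gym's actual hardware, so every name and number below is a guess), the logic behind such a light might look like the Python below. The trigger_delay models the recognition lag that caused the problem, while hold_time is what saves the power; shrinking the delay, or spending it while the person is still at the first door, is what would fix the interaction.

    class MotionLight:
        """Toy model of a motion-activated light. All values are invented."""

        def __init__(self, trigger_delay=2.0, hold_time=300.0):
            self.trigger_delay = trigger_delay  # seconds of motion before the light responds
            self.hold_time = hold_time          # seconds the light stays on after the last motion
            self.first_motion = None
            self.last_motion = None
            self.on = False

        def sense_motion(self, now):
            if self.first_motion is None:
                self.first_motion = now
            self.last_motion = now
            # The light only responds once motion has persisted past the trigger
            # delay -- long enough to cross a short corridor in the dark.
            if now - self.first_motion >= self.trigger_delay:
                self.on = True

        def tick(self, now):
            # Switch off (and re-arm) once the room has been still long enough.
            if self.on and now - self.last_motion >= self.hold_time:
                self.on = False
                self.first_motion = None

    light = MotionLight(trigger_delay=2.0)
    light.sense_motion(0.0)   # step into the corridor: still dark
    print(light.on)           # False
    light.sense_motion(2.5)   # 2.5 seconds later, the light finally comes on
    print(light.on)           # True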

Bad Example (Button):

Another bad example that deserves an honourable mention is the elevator that takes you up to the shop Football Central on Tory Street. Not only do the doors take forever to open and close (they move extremely slowly), but there is also an unbelievably long and unnecessary delay when you get to the floor you have chosen. As soon as you reach the desired level, the elevator seems to pause for at least three seconds before jolting and opening the doors at an agonisingly slow speed.


Good Example:

It was quite difficult to decide on a good interaction, as they generally go unnoticed or unappreciated because of how smoothly and efficiently they complete their intended task. The example I am going to use is the swipe-key system used at The Cube (Massey Accommodation). Each resident has a swipe tag which lets them in simply by scanning it against a small black box by the main entrance door. I feel this interaction is nice and simple, and it allows access only to those who live there.

The small black swipe box detects the tags very easily too; even if the key is in your wallet, you can hold the open wallet up against the box and it will still detect your tag. Once the tag is detected, the light on the box turns green and the doors open at a good speed.

The swipe tag system is quite a good way of ensuring security in the building, and it can even be configured to allow certain tags access to extra areas such as the staff room.

The fact that the swipe tags can be disabled so easily also helps: if a student happens to lose their tag, it can simply be deactivated so that whoever picks it up is unable to use it to get into the building.
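
At its core, a system like this reduces to a very small lookup. The Python below is my own guess at the shape of it, with invented tag IDs and area names; The Cube's real system will obviously differ, but it shows how per-tag area lists and a disabled list cover both of the points above.

    # Sketch of a swipe-tag access check. All IDs and areas are invented.
    AREAS_BY_TAG = {
        "tag-0417": {"main_entrance"},                 # typical resident
        "tag-0926": {"main_entrance", "staff_room"},   # staff member
    }
    DISABLED_TAGS = {"tag-0311"}  # reported lost, so it no longer works


    def may_enter(tag_id: str, area: str) -> bool:
        """Return True if the tag is active and authorised for the area."""
        if tag_id in DISABLED_TAGS:
            return False
        return area in AREAS_BY_TAG.get(tag_id, set())


    print(may_enter("tag-0417", "main_entrance"))  # True  -> green light
    print(may_enter("tag-0417", "staff_room"))     # False -> stays locked
    print(may_enter("tag-0311", "main_entrance"))  # False -> disabled tag
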
Much like digital interactions, it seems a lot easier to notice the bad physical interactions, such as doors opening slowly, delayed lights, or sensors failing to detect movement at all. This is because we expect them to work efficiently enough that we can carry on with our daily tasks without any bother, so as soon as we run into a bad interaction, it stands out. In a way, the good interactions are those that go unnoticed; the ones that happen so smoothly and unconsciously that we don't even bat an eyelid.

Friday, 24 April 2015

Project Two

Following the final presentations today, we were introduced to what our next assessment would be. The blog posts from now on (above this post) will be for assignment two. The posts that can be found below this post are the blog posts for assignment one.

Tuesday, 14 April 2015

Final App

Follow the link to find our final app...

http://invis.io/CH2OWD6NE


Note: Follow the path of... Drink - Alcohol - (any 500m Bar)
This will ensure that you see what the end result will look like.

With our final presentation next Friday, we decided to meet up today in order to get the app finished comfortably before hand-in. We made some slight changes to a couple of screens. The biggest change was the new layout for the "Suggestion page", which shows all of our suggestions in relation to the proximity of each location. Another small change was the feature of clicking on the map (on each final page), which zooms it in slightly. We were going to make the map pan as the user dragged it, and zoom with the pinch-and-expand gesture, but InVision had no features for this, so we stuck to a basic zoom. Some of our screens have been attached below.

[Screenshots of our final screens]

Overall, I am extremely pleased with the final outcome of this interface design. We got a lot of good feedback in the final class critique, and we have developed the design further since, so I feel it is at an even more refined level than before. It was interesting to see how useful the paper prototyping was initially, as it allowed us to quickly mock up our first concepts without putting too much effort into designing them digitally. The introduction to InVision then made everything so much easier: it offered an extremely quick and effective way to prototype our interface design digitally, as well as to output the prototype onto our intended device (an iPhone 6).

The continuity in theme and screen transitions (with the left and right pushes) is an effective way of supporting the user interaction, as the pushes show whether the user is moving forwards to the next screen or backwards to the previous one. The colour scheme is very simple, yet effective, and each button has been designed to look like it is meant to be pressed, making the experience easier for the user.

Another way of making the process easier for the user was to shorten the number of interactions required to reach the intended final result. We did this by removing a few filters and having slightly more options on the proximity screen. Proximity is actually a really important part of our app, as it is about finding places in Wellington close to where the user is. On the proximity screen (as seen in the 500m zone for alcoholic drinks), we have included the logos of the bars in that zone, making it easier for the user to tell the buttons apart. We only did this for a few screens, creating a specific pathway for the user through the prototype; this gives a clear indication of how the final app would operate without making a whole heap of extra screens that would all convey the same message.

In terms of group work, I feel we both contributed a relatively even amount to the project, and we shared similar ideas on how the final app should look right from the start. We found it easy to agree on each other's ideas, although we weren't afraid to point out potential flaws and find ways around them. Both of us managed our time effectively, and we made sure we had all of the required work ready for each class. Meeting up regularly to discuss the app, and sharing files via Dropbox, made the whole project a lot easier and more successful.

I have really enjoyed this paper so far in terms of learning about user interaction with different interfaces, and it is crazy to see how easy it is to notice bad interfaces now, whenever I am online or using another app. I am looking forward to the final presentation next Friday: it will be exciting to share our own app with the class, and interesting to see how everyone else's apps have turned out. Now that this project is completed, I am keen to see the brief for the next project and start coming up with ideas for it.

Thursday, 2 April 2015

App Development


Following the critique last Friday, and even though we don't have class tomorrow as it's Easter Friday, Alfred and I decided to meet up and work on developing our app, so that we would be finished nice and early in advance of the hand-in and final presentation.

We met in the university library to develop the app based on the feedback we received. Because of the positive feedback on the layout, colour scheme, and continuity throughout, we kept these the same. The main change we made was reducing the number of interactions required to reach the final outcome: the search button now leads straight to "food, drink or entertainment", and from there only one more filter is used to narrow down the search. A screenshot showing the new screens has been pasted below.


Instead of a screen with just three choices, we made one that gives slightly more choices based on proximity to the user. That way the user can decide how far they want to go while making their selection, rather than going through a separate screen to filter by distance.
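
There is no real code behind our prototype (InVision is click-through only), but a rough sketch of the filtering this screen implies might look like the Python below. The venue data, the distance bands, and the distance formula are all invented stand-ins for illustration.

    # Group matching venues into distance bands around the user, so distance
    # is chosen as part of the selection rather than on a separate screen.
    import math

    VENUES = [
        {"name": "Bar A", "category": "drink", "lat": -41.2940, "lon": 174.7780},
        {"name": "Bar B", "category": "drink", "lat": -41.2900, "lon": 174.7820},
        {"name": "Cafe C", "category": "food", "lat": -41.2925, "lon": 174.7760},
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation: accurate enough at city scale.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return math.hypot(x, y) * 6_371_000

    def by_proximity(user_lat, user_lon, category, bands=(500, 1000, 2000)):
        """Return {band_in_metres: [venue names]} for one category."""
        grouped = {band: [] for band in bands}
        for venue in VENUES:
            if venue["category"] != category:
                continue
            d = distance_m(user_lat, user_lon, venue["lat"], venue["lon"])
            for band in bands:
                if d <= band:
                    grouped[band].append(venue["name"])
                    break
        return grouped

    # e.g. {500: ['Bar A'], 1000: ['Bar B'], 2000: []}
    print(by_proximity(-41.2935, 174.7772, "drink"))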


The final screen has changed as well: instead of a whole screen of information and an image, it now gives some basic details about the place, along with a map showing how to get there from your current location.


I feel this development has improved our app greatly, making it a lot quicker and easier for the user while maintaining the continuity in colour scheme and layout that we already had.

We will look to meet up again before the final presentation and hand-in, so that we can find any aspects that need further refinement and polish them before submission, making the app as good as it can be.

The link to the app hasn't been shared in this post as we are still editing it, so I will post the final link in my final app design blog post.