Code Prototype, Usability, Code and Project Evaluation
This document contains discussion about our product prototype, usability test plan, usability analysis, and prototype and project evaluation.
We have decided to name our application 'GameManage'. We believe that this is a catchy brand name that instantly tells the user what the application is about. All the code of GameManage and its associated documentation can be found here in the GitLab repository.
We have used the Heroku cloud platform to deploy our application. We have included HTTPS links to two deployed versions of GameManage below. The first version, with limited components, was used in our usability testing. The second and final version adds some components (e.g. carousel, various featured games, footer, updated avatar) to the user interface based on the results of the usability test.
- Deployed version of prototype used in usability testing: https://gamemanage-flask-app.herokuapp.com/
- Deployed final version of prototype: https://gamemanage-prototype-app.herokuapp.com/
3 Usability Test Plan
In this section, we discuss our usability test plan.
3.1 Current Usability Test Plan
We will select four users for the usability test. The planned demographic information about the users is detailed in Table 1. Most importantly, all of these users must be at least somewhat familiar with playing games, whether card games, board games, RPGs, or others. We do not think it would be relevant to include people in the usability test if they do not have at least basic experience with games, because our application's target audience is gamers who need a solution to manage their game collection.
We believe it is necessary to receive feedback from both a younger audience (i.e. 13-20 years old) and an adult audience (i.e. 21-45 years old). Our reasoning is that a younger audience may behave differently from an adult audience when using web applications, and we want to ensure that our application appeals to both subsets of our target audience.
There may also be differences in user behavior across genders (i.e. female and male). We want to make our application friendly for both genders, so it is important to receive feedback from both female and male users.
We plan to select users who can read, write, and speak in English, because our application's content is in English, and it would be difficult for someone unfamiliar with the language to use the application. In the future, we may consider developing versions of the web application for users of other natural languages (e.g. Cantonese, Bangla, Farsi, Swahili), given adequate demand and resources.
| User | Age (years) | Gender | Language |
|---|---|---|---|
| 1 | 13-20 | Female | Can read, write, and speak in English |
| 2 | 13-20 | Male | Can read, write, and speak in English |
| 3 | 21-45 | Female | Can read, write, and speak in English |
| 4 | 21-45 | Male | Can read, write, and speak in English |
Table 1: Demographic details of planned user cohort in usability testing.
We plan to arrange a video call individually with each selected user. Each user will be given a general overview of what she or he can expect in the usability test. The user will then be given a text file that contains the address of the deployed application and a set of specific tasks, which we will ask the user to execute. Each task will be broken down into steps of discrete activities to be done using GameManage, but without mentioning features of GameManage specifically.
We will spend 20 minutes with a user on average. Each participant will be asked to say what they are doing or trying to do, what they are looking at, and what they are thinking, i.e. a "think-aloud" protocol. We will take pen-and-paper notes to record qualitative data. We will ask users to share their screens, and we will record the video chat. Both the notes and the video recording will be used for evaluation.
The quantitative metric that will be collected is an overall rating of the application from 1 to 5, where 1 means the application is very poor and 5 means it is excellent. We will let each user decide the criteria on which she or he rates the application. This allows users to be open-ended with their answers and to express their opinions freely. The rating data will be averaged for reporting.
All the qualitative data (i.e. notes) will be analyzed to obtain further clarification on different parts of the application. After evaluating all of the data, we will report on the top ten usability issues and the top ten successes of the application highlighted by our users. We will provide context on how the study went, along with qualitative details on the issues highlighted by the users, e.g. comments, quotes, and recommendations.
Below is the list of tasks that we will ask our users to execute during the user interface test.
Task 1:
- After you land on the homepage, scroll through from the top to the bottom of the page. Do not click anywhere yet.
- After completing the above, go to your collection.
Task 2:
- Search for a game by title only (e.g. monopoly, uno, yahtzee, catan). From the results, add any game to your collection.
- Repeat the above sequence for another game.
- Repeat the above sequence for another game, this time additionally filtering by year.
Task 3:
- Search for a game by title. From the results, click on any game to learn more about the game. Add the game to your collection if you are interested.
- Repeat the sequence above for another game.
Task 4:
- Go to your collection. Look for a particular game in the collection. Learn more about the game.
- Repeat the sequence above for another game in your collection.
Task 5:
- Go to your collection. Look for a particular game. Remove the game from your collection.
- Remove all games from your collection.
Below are the short questions that we will ask our users to answer after the user interface test:
[1] Overall, what would you rate this app from 1-5, and why?
[2] Do you like the design of the app? What can be better about the design?
[3] What other functionalities or features would you like to see in this app?
[4] Would you use an app like this for managing your game collection, searching for new games, connecting with other gamers, and joining gaming clubs? Why or why not?
3.2 Advanced Usability Test
We had a thorough discussion on how to design our usability test plan, mainly regarding its complexity and execution. We both agreed that the test plan should be realistic, i.e. a strategy that we can actually implement with some users to receive early comments about our prototype. Therefore, we will focus on testing with only four different users.
For now, adding complexity to the actual test may not be useful for us, as the prototype's features are quite limited. Moreover, our time is limited, which also limits how many people we can test the application with. A user's time is limited too, so we will keep the test as brief as possible to avoid boring the users and to receive quick feedback.
Although we decided to keep our usability test plan realistic and small in scale at the current stage, we would like to list several possible ways of adding complexity to our test plan in the future.
Evaluate users with different levels of experience with games
- For the user cohort, we can add one more dimension of experience with playing games, i.e. Beginner, Intermediate, Advanced, Expert. For instance, users may be classified by the number of years they have played board games. The purpose of this breakdown is to evaluate our app's usability across the target group, i.e. gamers with different levels of experience.
- The issue we may face is that games are a very broad category, and there is no one-size-fits-all measure here. For example, one user may be very experienced in Monopoly but weak in Uno; another may be very good at FIFA and very poor at Yahtzee. So, it would be difficult to categorize users into broad levels of expertise, as their skills may be quite niche.
More quantitative metrics
- Apart from qualitative measurement, we can also record more quantitative data for further analysis. For example, we can record the time spent on each task, the number of games added, the time spent on each page, and more. Although the quantitative data may not always be interpretable on its own, we can further interview the user to learn more about her decisions when using the application (see the sketch after this list).
- We are aware that there is not much point in collecting user data if it would not be used for evaluation. For the actual testing, we do not have the resources to measure user data automatically (i.e. website analytics). We would need to integrate digital measurement technologies into our technology stack in the future to accomplish such a task.
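To make this concrete, here is a minimal sketch of how a moderator could manually log per-task durations during a call. Everything in it (function names, the CSV layout, the press-Enter-to-stop convention) is our own illustration rather than existing GameManage tooling.

```python
# Minimal sketch for logging per-task durations in a moderated session.
# All names and the CSV layout are illustrative, not GameManage code.
import csv
import time


def run_timed_task(user_id, task_name):
    """Time one task; the moderator presses Enter when the user finishes."""
    start = time.perf_counter()
    input(f"User {user_id}: '{task_name}' started. Press Enter when done... ")
    elapsed = time.perf_counter() - start
    return {"user": user_id, "task": task_name, "seconds": round(elapsed, 1)}


def save_results(rows, path="usability_timings.csv"):
    """Append timing rows to a CSV file for later analysis."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["user", "task", "seconds"])
        if f.tell() == 0:  # write the header only for a brand-new file
            writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    tasks = ["Task 1", "Task 2", "Task 3", "Task 4", "Task 5"]
    save_results([run_timed_task(user_id=1, task_name=t) for t in tasks])
```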
4 Usability Analysis
In this section, we discuss our analysis of the results of usability testing.
4.1 Summary of Usability Testing
We executed usability testing with four users. The developers (Tom Chan and Nabil Shadman) were not included in the usability testing, to reduce potential bias. The details of the four users are given below.
| User | Age (years) | Gender | Language |
|---|---|---|---|
| 1 | 31 | Female | Can read, write, and speak in English |
| 2 | 30 | Male | Can read, write, and speak in English |
| 3 | 42 | Male | Can read, write, and speak in English |
| 4 | 38 | Male | Can read, write, and speak in English |
Table 2: Demographic details of actual user cohort in usability testing.
We arranged a video call with each user. They were given a brief overview of the test and what they were expected to do in it. Then, they were provided with a document that contained the address of the website (deployed on Heroku) and the set of specific tasks described in the Usability Test Plan. The users were asked to share their screen and describe aloud what they were doing, as per the test plan. While they executed their tasks, we noted down observations from both the live screen share and the users' comments. As planned, the users were then asked the set of brief questions to gather further insights, which we noted down as well.
We did not record the video calls with our users, deviating from the plan. This is because we realized later that our users might not feel comfortable expressing their opinions freely if they were aware of being recorded. As we wanted to get as much honest feedback on our prototype as possible, we did not record the calls. We realize that not recording the calls may introduce some bias, as we have no artifact to check against if, looking only at our notes, we are unsure of what a user said at a specific moment. It could have been helpful to refer to a recording to extract further insights that we may have missed in the actual meeting.
Our actual user cohort deviated from what we had planned. All our users were aged 30 years or above, whereas we had planned to receive feedback from two users between 13 and 20 years old. Also, only one of the users was female, whereas we had planned to test the application with two female users. So, our actual testing cohort was biased towards adult males. This deviation occurred because we were unable to find younger users or more female users during the window of the usability testing phase of the project.
We are aware that not having the planned representation of age and gender biases the observations. For instance, we missed the opportunity to receive feedback from a younger audience, who may behave differently when using web applications. Similarly, we could have benefitted from feedback from an additional female user, who may have had a different view of our prototype.
4.2 Discussion of Results
Overall, our users rated the application 3.0 out of 5.0 on average. The users were aware that the application was a prototype, and they accounted for that in their ratings. The ratings ranged from 2 to 4 (out of 5), indicating differing levels of satisfaction with the application.
Three of our users suggested adding more visual content to the user interface, such as animation, a three-dimensional experience, a colorful theme, images, videos, and interactive elements. User 1 mentioned, “Without visuals, the website looks sketchy”. User 2 mentioned, “The design is basic. It would be nice to have an attractive theme.” User 3 mentioned, “The graphic design could be better”.
Three users recommended placing videos in different locations on the website to enhance the user experience. For example, User 3 mentioned that a video could be placed on the homepage to explain how to navigate the website and what features it has to offer. Users 2 and 3 both recommended placing a short video on each individual game profile page to explain that game. Users 1 and 2 both mentioned live-streaming of gamers playing video games such as League of Legends, Dota, or FIFA.
Three of our users mentioned that they would like to see the game search engine enhanced further. User 1 thought that the search results were limited and that the feature could show more results. User 2 suggested adding filters to the search results, such as game type and number of players. User 4 would like to see autocomplete suggestions based on partial input; for instance, if the user types “mono”, the search bar could display suggestions such as “monopoly” and other matching games to help complete the search. User 1 suggested displaying results based on generic keywords; for example, if the user searches for “card”, the search feature could display recommended card games.
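To illustrate the autocomplete suggestion from User 4, here is a hypothetical sketch of a prefix-matching endpoint in Flask. The route name, the hard-coded title list, and the five-suggestion cap are all assumptions for illustration; GameManage does not implement this feature yet.

```python
# Hypothetical autocomplete endpoint; not part of the current prototype.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative titles standing in for the real game catalogue.
GAME_TITLES = ["Monopoly", "Monopoly Deal", "Uno", "Yahtzee", "Catan"]


@app.route("/autocomplete")
def autocomplete():
    prefix = request.args.get("q", "").strip().lower()
    if not prefix:
        return jsonify([])
    # e.g. /autocomplete?q=mono -> ["Monopoly", "Monopoly Deal"]
    matches = [t for t in GAME_TITLES if t.lower().startswith(prefix)]
    return jsonify(matches[:5])
```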
We observed that some of the users were primed by the featured games on the homepage. Our deployed version used in testing contained only versions of Catan, which influenced those users to search for Catan in the search bar to learn more about its versions. User 1 was already familiar with Catan and mentioned that she was interested in purchasing another version of it. Overall, some users benefitted from this priming on the homepage, as they indicated in their answers to the brief questions.
We present the top ten usability issues that our users highlighted in Table 3. The issues are ranked by how many users mentioned the same issue (i.e., Count). Ties were broken heuristically by how many times users mentioned the issue during the meetings, indicating its relative importance.
| Rank | Issue | Count |
|---|---|---|
| 1 | The user interface needs more visual content | 3 |
| 2 | The app needs more video content | 3 |
| 3 | The search feature needs to have filters, autocompletions, and more results | 3 |
| 4 | The app needs to display information about other gamers | 2 |
| 5 | There needs to be social features to connect with other gamers | 2 |
| 6 | There needs to be information on gaming competitions | 2 |
| 7 | The app needs to show advertisements on sales, discounts, and limited-edition merchandise | 2 |
| 8 | The app needs to have various featured and recommended games | 2 |
| 9 | The app needs to keep a user on the same page after adding a game to their collection | 1 |
| 10 | There needs to be a remove-all button and checkboxes to remove either all or specific games in one click | 1 |
Table 3: Top ten usability issues highlighted from the usability test.
Our users also mentioned some aspects of the application that they liked during the usability test. Our greatest success was hearing from three users that they would be interested in using a fully developed version of the application to manage their game collection, search for new games, purchase games, or connect with other gamers. The top use case mentioned by the users was managing games, which did not come as a surprise to us, as it is one of the core features of our product.
Three users also indicated that they liked the availability of various versions of the games. For instance, User 1 enjoyed seeing the various versions of Catan available on the homepage; she mentioned that she had the original version of the game and was looking to purchase another version. Additionally, users indicated that the website is easy to navigate. We were pleased to hear this, as it was one of our core objectives when designing the user interface. As mentioned in our Requirements document, one of the issues we found when researching competitor applications was that their websites were difficult to navigate. We are glad to have addressed this issue for our users in the prototype.
Below in Table 4, we have presented the top ten successes of our prototype mentioned by our users in the usability test.
| Rank | Success | Count |
|---|---|---|
| 1 | Interest in using a fully developed application to manage game collection | 3 |
| 2 | Availability of various versions of games | 3 |
| 3 | Website is easy to navigate | 2 |
| 4 | Search feature is easy to use | 2 |
| 5 | One-stop solution for managing game collection, searching for new games, and connecting with gamers | 2 |
| 6 | Interest in purchasing games from a fully developed application | 2 |
| 7 | Layout of the homepage with featured games | 2 |
| 8 | Catalogue or grid display of the games | 1 |
| 9 | Confirmation message that a game has been added to collection | 1 |
| 10 | Website is fast | 1 |
Table 4: Top ten successes highlighted from the usability test.
4.3 Impact on User Interface Design
Even though we tested our prototype with only four users, several issues were highlighted, which gives us many insights for developing the prototype into a production quality application. Fortunately, we were able to make some quick changes after our usability test to improve the prototype before submitting our work to the Product Owner for review. For instance, we added a carousel on our homepage with images and brief descriptions of the website. This update partly addresses the issue of limited visual content in the earlier version of the prototype.
Our next step in UI development is to focus on what our users highlighted during usability testing. Our ranking of the top issues in Table 3 will help us prioritize the work. For instance, our initial work will focus on adding more visual content to the application to make it more appealing to our users. Then, we will work our way down the list of issues to add more video content, enhance our search engine, and so on. With this approach, we believe we can provide the best possible user experience to our target audience given our resources and time.
5 Prototype and Project Evaluation
In this section, we evaluate our prototype and our project.
5.1.1 Requirements Review
After a careful review of the requirements, the development plan, and the product brief, it is clear that we do not have the time and resources to complete all of the requirements of the prototype as discussed in our Requirements documentation. We are aware that the prototype does not need to be a complete implementation. Ideally, we would have liked to complete all of the requirements as prioritized with the MoSCoW method, had we had a larger development team.
Our strategy was to write enough functional code to evaluate the viability of the design and of the core features, i.e. the game search engine and the game collection. Therefore, although most of the requirements are not fulfilled in the prototype at the current stage, this is as anticipated, and we are confident that we can implement a complete production quality application based on the prototype's evaluation.
Below is a summary of what we have been able to complete thus far in our prototype:
- All "Must have" functional requirements have been completed.
- "Must have" non-functional requirement is fulfilled, i.e. a user can add multiple types of games into her collection.
- Only one functional requirement in "Should have" has been completed, which is to allow a user to search for games she does not have.
- "Should have" non-functional requirement has not been completed.
- "Could have" functional and non-functional requirements have not been completed.
5.1.2 Design Review
5.1.2.1 Technical Review
- Flask and Bootstrap as main development tools

As planned for the prototype, we implemented the backend using the Python programming language with the Flask web framework. For the frontend, we decided to use HTML and Bootstrap (a CSS framework directed at responsive frontend web development) to lay out the components [5]. The look-and-feel of the UI does not yet match our UI design exactly; our strategy here was to get similar components in place from the Bootstrap library.

We evaluate that our combination of web frameworks (i.e. Flask and Bootstrap) is very efficient for developing a prototype with a certain degree of extensibility, and we can easily maintain and refactor the code. The drawback is that both are third-party libraries that are relatively new to us given our experience. Moreover, using these external libraries creates dependencies on them, which requires us to monitor for bugs and updates and to regularly verify that our application is deployed appropriately.
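As a minimal, self-contained sketch of this setup (the route and the inline markup are illustrative, not taken from the GameManage codebase, where the markup lives in Jinja templates), a Flask view can serve a page that pulls Bootstrap's stylesheet from its CDN:

```python
# Minimal sketch of a Flask view serving Bootstrap-styled markup.
# The route and the page content are illustrative, not GameManage code.
from flask import Flask

app = Flask(__name__)

# Bootstrap is pulled in from its CDN, so components such as the navbar,
# grid, and carousel come ready-styled [5].
PAGE = """<!doctype html>
<html>
<head>
  <link rel="stylesheet"
        href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css">
  <title>GameManage</title>
</head>
<body>
  <div class="container"><h1 class="mt-4">GameManage</h1></div>
</body>
</html>"""


@app.route("/")
def home():
    return PAGE


if __name__ == "__main__":
    app.run(debug=True)
```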
- Usage of the Board Game Atlas API

Originally, we planned to develop a database system as the first step, but we have not done so. Instead, we used the Board Game Atlas (BGA) API as our content source [6]. The API has a large collection of board and card games. We worked on learning how to use the API and discovered that it was quite simple to use. The outcome is satisfactory to us, as the data is very relevant to our application, and we saved plenty of time on developing the required database system and populating it with the relevant game data.

As indicated in the usability test, our users would like to see more games in our search engine. In the future, we aim to integrate data from multiple APIs to include various types of games, e.g. RPG, video, e-sports, learning. We have already identified some relevant APIs for this purpose, such as the BGG XML API2 [7], the IGDB API [8], and the Riot Games API [9]. We will continue to integrate more APIs as we deem appropriate for our users.
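Below is a sketch of how a title search against the BGA API can be issued with the `requests` library. The endpoint and parameter names follow our reading of the API documentation [6], the response field names are our assumption from that documentation, and the `client_id` placeholder must be replaced with a key issued by BGA.

```python
# Sketch of querying the Board Game Atlas search endpoint [6].
# Field names are our reading of the docs; client_id is a placeholder.
import requests

BGA_SEARCH_URL = "https://api.boardgameatlas.com/api/search"


def search_games(title, client_id, limit=10):
    """Return a list of games whose names match the given title."""
    params = {"name": title, "limit": limit, "client_id": client_id}
    response = requests.get(BGA_SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("games", [])


if __name__ == "__main__":
    for game in search_games("catan", client_id="YOUR_CLIENT_ID"):
        print(game.get("name"), game.get("year_published"))
```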
We will create an Extract-Transform-Load (ETL) pipeline in which we clean the data from the APIs to make it appropriate for our application. This includes tasks such as making game type classifications more specific for our users, weighting user ratings from different websites, accounting for ratings from our own users, and other steps that make the data more helpful. The cleaned data will be persisted in our database for rapid querying.
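As a sketch of one such transform step, with a record layout and weights entirely of our own choosing (the field names below are assumptions, not a fixed schema), an external rating could be blended with ratings from our own users like this:

```python
# Illustrative ETL transform step; record layout and weights are assumed.
def blend_ratings(external_rating, own_ratings, external_weight=0.7):
    """Weighted blend of an external rating with our users' mean rating."""
    if not own_ratings:
        return external_rating
    own_mean = sum(own_ratings) / len(own_ratings)
    return external_weight * external_rating + (1 - external_weight) * own_mean


def transform(raw_game, own_ratings):
    """Clean one raw API record into the shape our database expects."""
    return {
        "title": raw_game["name"].strip(),
        # Map the source's free-form categories onto our own game types.
        "type": raw_game.get("type", "board").lower(),
        "rating": blend_ratings(raw_game.get("average_user_rating", 0.0),
                                own_ratings),
    }


print(transform({"name": " Catan ", "average_user_rating": 3.8}, [5, 4]))
# -> {'title': 'Catan', 'type': 'board', 'rating': 4.01}
```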
- Deployment on Heroku

We thought it beneficial to deploy the application on IBM Cloud, AWS, Heroku, or a similar cloud platform so that users could easily access and test the app instead of going through the steps of installing it on their local machines. We were able to deploy the app on Heroku's free tier, and it indeed made usability testing much easier for our users [10].
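For reference, a Heroku deployment of a Flask app typically declares its web process in a one-line Procfile along these lines; the `app:app` module and variable names are placeholders for the actual entry point, and `gunicorn` must be listed in the project's requirements.txt:

```
web: gunicorn app:app
```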
5.1.2.2 UI Review
- Usability Analysis

Our usability analysis provides several insights for improving the user experience. As mentioned above, we will focus on the top usability issues highlighted in the test when developing our prototype into a production quality application.

- Post-Usability Testing Updates

We have already improved some parts of our user interface based on the results of the usability testing. For instance, we added a carousel at the top of our homepage to visually describe what our application is about (Figure 1). The post-usability user interface updates are available to view in our final deployed application on Heroku.

We have also added various featured games to our homepage (the previous iteration showed only different versions of Catan). This will help users generate more ideas on what games to look for when searching (Figure 2).
Figure 1: Carousel added to homepage of GameManage web application.
Figure 2: Various featured games added to homepage of GameManage.
5.2.1 Development Plan Review
5.2.1.1 Time Estimation
Overall, we followed the timeline in our development plan (phase 3, diagrammed with the Gantt chart). We started coding in Week 10 of the course schedule and worked until the end of Week 11 to have a functional system with some basic features. In parallel, we also worked to put a usability test plan in place. In Week 12, we executed our usability test with the code we had at that point and analyzed the test results. In the same week, we developed the prototype further, introducing more features, documenting code, reviewing code, refactoring code, running tests, and fixing bugs. We stopped all development work at the end of Week 12.
We noticed that our time estimates were quite accurate when implementing the components, and we did not deviate much from them. Of course, we understand that there is generally some level of randomness in the actual implementation times of projects. From the start, we kept the project risks in mind and communicated frequently to address any issues early. In further development, we aim to continue to be cautious about actual implementation times to mitigate risks of project failure.
5.2.1.2 Extensibility Review
As we have mentioned above, the prototype fulfills the "Must Have" requirements and has been tested with several users. More core features remain to develop the prototype towards a production quality product. These features are summarized below.
(a) Game review feature
(b) Game rating feature
(c) Forum feature
(d) Shop feature
(e) Game recommendation engine
(f) Player search engine
(g) Club search engine
Indeed, the BGA API we are currently using has review, rating, and forum data that are ready to use. We can leverage this data again to build a more mature version of the prototype. However, we understand that when we shift from the prototype to the production-ready app, we will need to build our own data warehouse for persistence and rapid querying. As per our original design, we will create a REST API to feed the relevant data to our frontend.
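A minimal sketch of such a REST endpoint is shown below. The route, the response fields, and the in-memory stand-in for the data warehouse are assumptions about the future design rather than existing GameManage code.

```python
# Sketch of a future REST endpoint serving game data to the frontend.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# In-memory stand-in for rows that would come from the data warehouse.
GAMES = {1: {"id": 1, "title": "Catan", "rating": 4.2}}


@app.route("/api/games/<int:game_id>")
def get_game(game_id):
    game = GAMES.get(game_id)
    if game is None:
        abort(404)  # unknown game id
    return jsonify(game)
```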
Another key potential development is the game recommendation engine, which should eventually be based on a machine learning algorithm. This part could be included at a later stage, as a production quality product is good enough even without a machine-learning-based recommendation system. We plan to initially ship a naïve recommendation algorithm, i.e. recommending the games most often added to our users' collections.
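A minimal sketch of that naïve approach, assuming each collection is simply a set of game titles, could look like this:

```python
# Naive recommender sketch: rank games by how many collections hold them.
from collections import Counter


def top_games(collections, n=5):
    """Return the n games that appear in the most user collections."""
    counts = Counter(game for collection in collections for game in collection)
    return [game for game, _ in counts.most_common(n)]


collections = [{"Catan", "Uno"}, {"Catan", "Monopoly"}, {"Catan"}]
print(top_games(collections))  # ['Catan', ...] (order of ties is arbitrary)
```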
5.2.2 Team Review
Although the Gantt Chart shows the division of labor on each task (e.g. Tom Chan is accountable for usability-related tasks), it made sense for both Tom Chan and Nabil Shadman to code the prototype together and develop the usability plan in parallel. This was aligned with how we worked earlier in the project. We communicated frequently, reviewed each other's work thoroughly, and iterated on any opportunities to improve our project.
Working as a two-person team, the development has been challenging but not impossible. We are fully aware of our limitations and of what more we could achieve with additional resources and time. While such a situation may feel unique to our own experience, such situations are quite common in the broader world of software development.
Overall, although the product is only partly complete, it is consistent with our expectations. We communicated closely on each part and were open to sharing our views with each other. Indeed, communication may be easier for a two-person team than for a five-person team. We understand the merit of good communication and will bear this in mind when participating in project development in the future.
5.2.3 Risk Assessment Review
So far, our risk assessment has been successful in identifying certain risks to which we are exposed. For example, we recognized the risk of a high level of technical complexity, and we chose to leverage third-party libraries as much as possible when developing our prototype. In addition, we are aware that altering our development plan may lead to other risks that we have not considered before. Below are some additional risks that we would like to point out:
(a) Third-party library maintenance risk
(b) Compatibility risk when transitioning from third-party APIs to our own database with API access
(c) Data collection risk (e.g. copyright issues)
We will continue to review our risk assessment, especially when there are any changes to the requirements, design, development plan or any key part of our project.
References
[1] https://www.imaginarycloud.com/blog/flask-vs-django/
[2] https://hackr.io/blog/flask-vs-django
[3] https://www.guru99.com/flask-vs-django.html
[4] https://trio.dev/blog/django-vs-flask
[5] https://getbootstrap.com/
[6] https://www.boardgameatlas.com/api/docs
[7] https://boardgamegeek.com/wiki/page/BGG_XML_API2
[8] https://www.igdb.com/api
[9] https://developer.riotgames.com/
[10] https://www.heroku.com/