Great User Experience in E-Commerce starts with understanding customers!

How can we make an online shop stand out from the crowd of competitors? A good online experience offers useful information, such as comparisons between several products together with reviews. Researching before making a decision has become typical customer behaviour, and online shops involve far more interactions than just selling products.

In this article, I will describe key practices for capturing customers' needs in e-commerce by understanding them.



Accessibility: make your website barrier-free with a11ym!

Accessibility is not only about people with disabilities but also about making your website accessible to search engine robots. This blog post shares our experience with making the website of a famous luxury watchmaker, an important e-commerce application, more accessible. We built a custom tool, called The A11y Machine, to help us crawl and test each URL for accessibility and HTML conformance. Less than 100 hours were required to fix a thousand strict errors.

Issues with an inaccessible application

Accessibility is not just a matter of helping people with disabilities. Of course, it is very important that an application can be used by everyone. But have you ever considered that robots are visitors with numerous disabilities? What kind of robots, you may ask? Search engine robots, for instance: Google, Bing, DuckDuckGo, Baidu… They are totally blind, they have no mouse and no keyboard, and they must understand your content from the source code alone.

While it is easy to picture a man or a woman with a pushchair having a hard time on public transport, someone who is colour-blind, a widespread disability, can just as well have trouble browsing the web.



An opensource Drupal theme for the Swiss Confederation

After having contributed to the official styleguide of the Swiss Federal Government and having implemented it on a couple of websites, we decided to go further and bring this styleguide into a theme for Drupal, a well-known, versatile and robust CMS we implement regularly at Liip.

Screenshot of Drupal theme for the Swiss Confederation



Collaboration Over Contracts

For many businesses, the web has become a platform of strategic value. Their web-based services are under constant development, so they can cater for the changing needs and requirements of customers, partners and other stakeholders. It is their constant learning and improving that sets these organisations apart from the competition.


Multi-Device Interactions – Part 3 : The Canvas

This is the final part of the blog post series on Multi-Device Interactions. Previously, I outlined the second-screen trend in the TV industry (Part 1) and introduced some underlying models in our multi-device world (Part 2).

In this blog post we (finally) focus on the practicalities of Multi-Device Interaction Design. It has indeed become a challenge for User Experience Designers to develop solutions that account for the multi-device behaviour of today's user. As mentioned earlier, we have developed a canvas to think through and design multi-device interactions. The Multi-Device Interaction Canvas (MDIC) is a simple, modifiable canvas to map multi-device use cases. It is based on the theoretical models we presented in the previous blog posts.

At its core it respects three important factors:

  • Interactions (with Devices)
  • User Tasks
  • Context

Activity mapping with the canvas

Have a look at the canvas example. As you can see, the "interaction" zone forms the core of the MDIC canvas. In it, you diagram the different interactions with devices that can occur. Besides the regular interaction with common devices such as a smartphone, tablet, laptop or TV, we suggest two further options. Throughout the day, we might encounter public screens at the train station or at school. The section "other interactions" is meant for any object or device you interact with that might also grab your attention, be it auditory, visual or haptic. Imagine you are in your kitchen: besides your tablet with the recipe, you use a knife to prepare your ingredients. Driving to work by car could be another example (steering wheel). Practically any activity that does not fit a specific screen but still occupies your sensory channels belongs here. List the user's main activity or a short description at the top.

We have provided an example. The legend in the left corner shows the different interaction types (presented below).

A task is either unfinished or completed. Think of reading your favorite news on your tablet for a couple of minutes; then you move on to the next task. In this case you would assign filled bullets to this activity, to state that it started and stopped at defined times. Conversely, if an action starts on one device but shifts to another device or gets interrupted, use open bullets in between. It is that easy.

Dashed lines signal a device shift or complementary usage for the same task. Maybe you listen to music on your tablet and then move to your smartphone when leaving the house: use a dashed line for this, and also whenever a task is shared (complementary). For example, in our scenario the TV ignites the search for a related actor on Daniel's smartphone; it is a complementary task.

If your persona engages in multi-tasking, more continuous lines appear on top of each other. The thicker a line, the more attention is directed towards this specific activity.

So far so good. What about the context? The bottom section is there for this purpose. As the user's location changes, so do his interactions with the devices. We have based ours on the social presence of other people and on whether the setting is private or public. We also include mood types and location specifications. Feel free to adapt the contextual dimensions.

Think Multi-Device Interaction through the Canvas

A simple rule of thumb is to never have more than four tasks in parallel; keep in mind that the user's attention is very limited. Numerous problems can be spotted here. For example, because our user switches devices many times, the main activity gets disrupted. Look for disruptive factors. How long does it take a user to finish a task? Basically, the length of an activity path from open to filled bullet. What about device switches? Is there a cost of switching involved? It could be that the next device is simply not within reach.

The canvas will help you to ask the right questions. What if our persona used the tablet and not the smartphone for a given task? Can this action be continued, or how likely is it to occur in parallel? What happens if the context changes and our user spends time with his friends? Is the user interested in social or investigative information engagement (two different types of processing the content, see blog post two)? Ask yourself whether complementary issues can occur, e.g. how to actually search for the actor during a film. Does the user actively search for this information, or does the TV channel provide him with hints or, even better, companion applications?

Like the well-known Business Model Canvas, it can be used in very different ways.

The empty Canvas can be downloaded here.

To give you an idea how to use the canvas, have a look at our example scenario here.

Put your ideas and scenarios into practice and "think multi-device" from day one. We hope you like our tool. Feel free to send us feedback or improvements.

 


Multi-Device Interactions – Part 2 : The Models

In our last blog post we started off with John’s story to show the everyday encounter of multiple devices and screens, and outlined the emergence of the second screen business. The classical second screen solution is a companion app for mobile devices that delivers additional information to TV content, e.g. a quiz or sport statistics on your smartphone or tablet. With all the possibilities in a multi-device world, it’s crucial to focus on the conductor of all these instruments – the user! In the following sections we dive into some theoretical models on multi-device interaction.

Why would the user choose one device over another, pursue one activity over another? How important is the context and what other factors contribute to a specific user-behavior pattern?

Let’s have a look at the context which in many cases influences the choice of device.

Context of use

Digital devices are used in different social environments, as our story of John tries to illustrate. The context of use is key to the selection of a device. It indeed makes a big difference whether you sit in a coffee shop, chill out at home on the couch, or are with friends at a bar watching a football game.

What are the key factors which drive the preference for a device over others? Google identifies four of them:

  • Time: How much time is available and needed for a given task?
  • Goal: What is the defined intent, goal or task?
  • Location: Where is the user and which devices are physically close?
  • Attitude: How does the user feel and what is going on in his mind?

In our example scenario, John is on the go, riding the bus from his office to the shop during rush hour. This doesn't allow him to engage in longer tasks, as many distractions and possible disruptions are present. Here, the smartphone is the right device. But later at night, relaxing with Michelle at home, it's a much different setting with respect to the four factors above: time is available, John and Michelle are in a relaxed mindset, and they start discussing vacation ideas. Their goal is to find more inspiration for their trip, a casual browsing task that can easily be completed from the couch at home. A tablet is a perfect match for this purpose.

The right device

Google attributes different characteristics to each device. Whereas PCs and laptops are the primary tools to be "productive and informed", mostly used at home or at the office to pursue goals that demand time and focus, smartphones are the connecting devices: usually used on the go as well as at home, and best for communication and connecting with others. Smartphones are of course often used when time is scarce or information needs to be accessed quickly, e.g. for small tasks. And tablets? Entertainers, clearly. Tablets are used at home 70% of the time, mostly for browsing and entertainment, when time is largely available.

A consumer in the report sums it up as follows:

"My phone… I consider it my personal device, my go-to device. It's close to me, if I need that quick, precise feedback.

When I need to be more in depth, that's when I start using my tablet. The other part of it is where I disconnect from my work life and kind of go into where I want to be at the moment… I'm totally removed from today's reality. I can't get a phone call, I don't check my email; it's my dream world.

And then moving to the laptop, well, for me that's business. That's work. I feel like I've got to be crunching numbers or doing something."

Bradely, Google report, 2012.

Microsoft defines different archetypes to help marketers understand how users relate to their devices. It is a way of labeling six diverse user types.

The Everyman: TV as one of the most popular devices in our multi-screen world that delivers passive entertainment and comfort
The Sage: The laptop informs, empowers and teaches – clearly key to productivity
The Jester: The Gaming Console immerses consumers in another world
The Dreamer: E-Readers help us to escape into the world of books and are mainly used for this purpose only (despite the fact that some of them provide internet browsing too)
The Explorer: The Tablet facilitates discovery and investigation and is a great device on the go and rich in media and video
The Lover: The Mobile phone is the most personal device and evokes intimacy, commitment and trust – however, its downside is the constant demand of the user’s attention

Now that we might understand how a user comes to choose one device over another and how he relates to these devices, we may think about how to interact with multiple devices.

Modes of multi-screening

Screens can either be used sequentially or simultaneously (Google, 2012).

Sequential device usage occurs when a task is initiated on device A and finished on device B. One common example is browsing the web for shoes and bookmarking the interesting products on a smartphone, to later purchase the item on a laptop. In John's story, many sequential actions happen. For instance, the vacation trip planning is an activity that goes through different devices throughout the day. Based on numbers from the Google report, over 90% of users indeed engage in sequential use of devices to accomplish a given task the same day. It is thus not astonishing that Google launched a new AdWords tracking measurement for marketers in early October 2013: with "Estimated Total Conversions", cross-device conversions can be calculated.

Parallel device usage. Opposed to the sequential mode is the parallel use mode. Google speaks of simultaneous use when multiple screens are active and the information on the second screen is either related or unrelated to the main screen. The report distinguishes between multi-tasking (unrelated activity) and complementary usage (related activity). Clearly, multi-screening with a smartphone and TV is ranked as the most frequent combination among the users in their study, followed by smartphone and laptop. According to their research, emailing, browsing and social networking are the most performed tasks during simultaneous screen usage.
 
Even though multi-tasking and juggling different activities at the same time has been shown to mainly have negative effects on the performance and accuracy of a given task (Rachel et al., 2011), 78% of the participants in the Google Multi-Screen World study perform multi-tasking (unrelated simultaneous usage); complementary usage thus accounts for only 22%. It turns out that 77% of TV viewers use another device at the same time while watching television. TV often ignites search: at least a quarter of search occasions are prompted by television (Google, 2012).

There are other approaches to defining modes of use in a multi-device world. Here is another one.

The four paths of engagement

Microsoft on the other hand defines the mode of use with four “paths of engagement” with devices that partially overlap with Google’s definitions of sequential and simultaneous use.
 
1. Content Grazing
2. Investigative Spider-Webbing
3. Social Spider-Webbing
4. Quantum

Content Grazing is the classical distraction behavior when using multiple screens. It can be either related or unrelated to the content on the primary screen, similar to Google's definition of simultaneous use. It is often habit-driven and about small tasks running in the background. Think of Michelle texting back and forth with Sarah during the movie.
The Investigative and Social Spider-Webbing paths of engagement are about consuming content that is clearly related to a primary screen. Microsoft's distinction between investigative and social is straightforward: Investigative Spider-Webbing happens when moments of curiosity or knowledge seeking trigger a search action. Michelle's interest in finding additional information about the movie star is a good example. Social Spider-Webbing, on the other hand, is about social engagement in the form of conversations and connecting to like-minded individuals, say a tweet or an online discussion about content on a primary screen (Microsoft, 2013).

Quantum Tasking is the equivalent of Google's sequential use. Here, intended tasks travel over space and time from screen to screen, meaning big tasks are often divided into subtasks. The report also states that, when it comes to shopping, spontaneity plays an important role and is mostly present while using a smartphone. In our story it's when John purchases the flight tickets, remember? At home, the PC or laptop is clearly the leading device. Nevertheless, 67% of the studied users started shopping on a smartphone and accomplished the goal on a PC/laptop (Google, 2012), just like John did.

With all the different modes of use mentioned, users and solution providers started to care a lot about information and interaction orchestration. No wonder some theoretical concepts for screen coordination have thus been developed.

Screen coordination

In Ecosystems of screens, PRECIOUS DESIGN STUDIO documents six patterns for screen coordination:

Coherence (appearance and functionality is coherent across different devices)
Synchronization (data gets synchronized)
Screen sharing (multiple screens share a single source)
Device shifting (possibility to actively shift content from one device to another)
Complementarity (the classical TV companion app)
Simultaneity (devices display similar content simultaneously)

The strategies defined should help us understand and describe our multi-screen world. Here is an example: when John is working on his report, his data is synchronized and displayed coherently on different screens. While watching TV, the companion app Michelle uses is most likely simultaneous and complementary. Device shifting occurs when John preselects a song on one device and later streams his music library at the gym.

Summary

The trend towards a multi-screen world is emerging and is just about to become mainstream. How we interact with multiple screens is an art in itself. Different modes of use, contextual factors and screen coordination strategies have been outlined; they are crucial, but must always be considered together, in a holistic approach. The user acts as the "conductor of an orchestra of devices" that plays a hopefully "harmonious experience tune" that some user experience architects designed.

To do so, we can boil it down to four main factors that contribute to this understanding:

Context
User
Activity
Screen and Interactions


 
In the next post we'll introduce the Multi-Device Interaction Canvas – a tool to model and think through multi-device interaction scenarios in a simple and convenient way.


Multi-Device Interactions – Part 1 : The Second Screen

This article, the first in a series on multi-device interactions, introduces the concept and analyses existing second screen solutions from the broadcasting industry.
Let us start with a (not so) small introductory story (or directly check out the main part).

It’s 6pm and John shuts down his desktop computer at work. He has been writing all day on the yearly financial report to hand in next week and it’s not done yet. Of course, things are available in the cloud and he can continue later on from anywhere. John is running late and leaves the office in a hurry (as usual): grocery shopping before the shops close, meet his wife Michelle at the coffee shop, make a dinner reservation for the next day and send a first draft of his report to the management. John is far from ‘done with his day’.

6:10pm. He catches the bus. It's rush hour downtown. Arriving at the deli shop ten minutes later, he takes a look at the shopping list he wrote down on paper earlier in the day. His wife texted him to buy this noble vinegar from Italy, but John can't recall the name and quickly checks it on his smartphone. He briefly checks the e-mails on his various mail accounts – a bad habit. Paul, a good friend, had sent him some notes and links for a road trip to southern France in spring, which seems like a perfect idea! John therefore creates a to-do item in his favorite app.

7pm. While walking to the station, John makes use of the idle time and calls the restaurant to make a table reservation for the next day. Ten minutes later, he meets his wife at the coffee shop. They discuss the road trip plan for next spring. The smartphone comes in handy to take notes and dive into some first research for cheap flights. Bookmarked!

8pm and at home, they hang out on the couch and browse the web on their tablet, looking at sunny pictures from southern France to get some inspiration. The TV news is rambling in the background. Oh well, it's about time John finishes up his report and delivers a first draft!

10pm and finally done with his draft. Shutting down the laptop. After this late work shift, John feels the urge to go to the gym. The music on his smartphone keeps him going, pushing the weights. One hour later, John and Michelle relax and watch a movie on TV. Michelle is curious about the movie. She launches the companion app to read about the actors and the storyline. All of a sudden, a text message from a friend interrupts. The smartphone is within reach, so why wouldn't she take a look? It's Sarah, sending a photo of her new red shoes. Of course Michelle wants to know all about it. After messaging back and forth, Sarah shares the link to the web shop with Michelle. She bookmarks it instantly with the intention of purchasing the same shoes, but in blue, later.

11pm. The movie is still running. Michelle catches up with friends she had missed on her smartphone. Meanwhile, John purchases the flights he had bookmarked earlier that day. On his laptop it's just a few clicks. He shares the news with Paul via text message.

It’s been a long and rough day. John sets his phone alarm clock to 7am. He will wake up with a terrific fresh song he discovered on the daily bus ride yesterday. An exciting, brand new day is waiting.

Introduction

Every minute of our lives we face and interact with many screens for many more purposes. Just like John. A large portion of us owns multiple connected devices (Digital Tsunami, 2013; Mobify, 2012), and uses them together. Think of surfing the web on a tablet while watching television, or listening to music through phone and tablet, in an interchangeable manner. Bridging information through cloud services has already become indispensable to our daily routines: emails, calendars, music, documents, …

In such a multi-device environment, it becomes complicated for solution providers to ensure the consistency and quality of the user experience (Janelle Estes, 2013). Like the conductor of an orchestra, we interact daily with different 'instruments' toward a common goal. When, with which device and in what circumstances a user is doing what activity is a tough yet essential question to answer.

These days, novel and interesting ways of (social) engagement through multi-screen interaction are emerging, first and foremost in the TV industry. In fact, the market has already evolved at a fast pace towards the new world of multi-screen digitalism. With up to 90% of media interactions being screen based (Google, 2012) and mobile data traffic on the rise (Cisco, 2013), the foundation for future innovation in the multi-device interaction world is laid.

Second Screen and Television

The television industry was first to coin the term "second screen", referring to applications that go hand in hand with a primary screen: the TV. Consumers know it better under the name "companion device" – an application that provides supplementary information about the content displayed on the primary screen. But there's clearly more to it than just two screens.

A recent report from Business Insider explains "Why the second screen Industry is set to explode", referring to the very frequent use of mobile devices while watching TV (see Business Insider, 2013). It seems to be one of the most popular side activities of this mobile era of broadcasting. According to a survey conducted by Nielsen, about 86% of tablet and smartphone users engage in multi-device activities while watching TV (Nielsen, 2012). Here, second screen apps help bridge the gap between the user and media content. Often these apps provide incentives in the form of social media engagement or additional, in-depth information. For the latter, think of the sports companion application from Sky Sports that feeds users with live statistics and numbers while a football game is streamed on the first screen. The app includes recent player transfer deals and historical data on previous team performance, to name a few examples. In Formula One, on the other hand, users can take a more active role and customize their TV experience by switching between live cockpit camera views during the race. In fact, the implementation of such applications turned out to be a success (BroadbandTV News, 2013).
When it comes to second screen social engagement, according to eMarketer, one sixth of the audience actually shows social media activity about the content they consume on TV (Forbes, 2013). Many shows nowadays contain live feeds from social media channels such as Twitter. NBC's second screen app for The Voice, for example, uses a live voting system and enables fans to engage with celebrity judges during the broadcast (Simply Measured, 2013). Another companion app, made by NBC for the Million Second Quiz, lets viewers play the quiz in parallel to the show and enter competitions (Million Second Quiz, 2013). There are many more second screen products available that provide some sort of engagement for users (e.g. the Zeebox app).

Of course, one could argue that these second screen applications are specifically designed for the US and UK markets and that the underlying concepts might not be applicable to a culturally different market. This observation is very correct. However, looking at the Swiss TV industry, a recent survey shows that up to 76 percent of TV consumers use a second device connected to the internet while watching TV (werbewoche.ch, 2013). Also, SRF launched a companion game app for the quiz show Millionenfalle (App) in November 2013. And television pioneer channel Joiz has been all about social TV since 2011: users can check in to programmes, message with community members, and win goodies for engagement and social interaction activity.

Beyond the Screen: the User

The TV world with its social engagement incentives is important; let us however refocus on the user. After all, when engaging in multi-screen activities, the user's attention is split and every additional interaction is a source of distraction. We have to understand the underlying reasons that drive the user towards one device over another, to pursue one activity over another, in what context and why. This leads to a number of questions that are important to study in multi-screen interactions. We'll find possible answers to these questions based on a variety of different sources in our next blog post.


Open Badges – Certificates for Today

It feels a little weird now that, when I heard about Open Badges a year ago at the MaharaUK conference*, I didn't really get what it was about. It is actually an intriguingly simple concept: a certificate issued online for achievements of any kind, professional or vocational, small or big. This certificate comes in the shape of a graphical image you can display on your blog, Facebook, LinkedIn, e-portfolio, LMS profile page, etc. This "graphical image" is the badge. Clicking on the badge shows you information about the achievement behind it, who issued it and when, a possible expiry date and a link back to the badge issuer's website. On the badge issuer's page you will find more information and verification regarding the reasons for and the validity of the issued badge.

This system allows for a much more complete picture of your learning than the diplomas and certificates we are used to could ever provide. You can group your badges to provide insight into your soft-skill achievements, like communication or leadership skills, specific technical achievements in a programming language for example, vocational achievements such as sports awards, etc. Now if you put yourself in the shoes of someone working in recruiting, you can imagine the usefulness of getting access to a potential employee's badges, grouped to fit the application submitted. You can actually verify the information contained in the badges, and you get access to a much more specific and – at the same time – broader picture of an applicant's skills.

The place to keep your badges is in the Badges Backpack provided by the Mozilla Foundation, the creators of Open Badges. The Backpack is the place where you can put your badges into groups and manage privacy settings of your badges.

open badges explained illustration

Illustration taken from “Open Badges One Page Summary” courtesy of the Mozilla Foundation https://wiki.mozilla.org/File:OpenBadges_–_One-page_summary.pdf

It helps to understand the three main roles in the Open Badges Infrastructure (OBI): the earner, the issuer and the displayer. The earner is you and me; the issuer can be an institution, organisation, company, etc. using tools like the Moodle or Totara learning management systems or credly.com to issue the badges. Credly provides a service for institutions or individuals not using Moodle or Totara, covering badge issuing, display and the actual creation of badges.

The integration in Moodle and Totara makes it very easy to set up badges (provided you already have a graphical image for your badge). You drag the image into the designated area at site or course level and enter a title, description, duration… All this will make up the metadata of the badge, and you're pretty much done. You can then decide on the criteria for how the badge can be earned.

There’s a pretty cool tool to create badges online too: openbadges.me

What helped me better understand the benefit of Open Badges was seeing the system in action at the Smithsonian American Art Museum; take a look.

Badges come in PNG format, with the metadata embedded in the form of JSON blobs. This means badges and the information associated with them can easily be downloaded and uploaded. It is, however, a format meant to live on the internet, i.e. in a digital environment, and as far as I know there is currently no easy way to display and perhaps print the information contained in the badge once it is downloaded. The most valuable information in a badge is linked from it rather than embedded, pointing back to the badge issuer's site and adding authority to your badge.

Although not central, there is an element of gamification in Open Badges. They can encourage competition and pride in what you have achieved when you display your badges as you would your trophies or the cloth badges earned in a swimming course, in the scouts or at a Northern Soul night.

Open Badges is a great initiative from the Mozilla Foundation and I’d like to thank them for it.

I am also much obliged to Richard Wyles and the team at TotaraLMS and Mahara.org for bringing Open Badges to my attention and for the excellent integration work they put into Moodle, Totara and Mahara.

  • * This year's MaharaUK conference takes place in Birmingham on July 4th and 5th


Table Inheritance with Doctrine

Introduction

Lately we have had several projects where we had to store, in a database, very different items that share a common state.

As an example take the RocketLab website you are reading: Events and BlogPosts are aggregated in the LabLog list as if they were similar items. And indeed they all have a Title, a Date and a Description.

But if you go to the detail page of an Event or a BlogPost, you can see that they actually don't contain the same information: a BlogPost contains essentially formatted text, whereas an Event contains more structured information such as the place where the event will take place, the type of event it is, whether people need to register to attend, etc.

Still, we sometimes have to access those entities as similar items (in the LabLog list) and sometimes as different items (in the events list and in the blog posts list).

Naïve database model

Our first idea (and it was not that bad; Drupal does just the same) was to have a database table with the common fields, a field containing the type of item (either an event or a blog post) and a data field containing the serialized corresponding PHP object. This approach was fine until we had to filter or search LabLog items based on fields contained in the serialized data.

Indeed, SQL does not know anything about PHP serialized data, so you cannot use any of its features on that data.

So how do you get all the LabLog items that are Events, happen in April 2012 and are "techtalks"? The only way is to go through all the Event records of April, unserialize the data and check whether it is a techtalk event. With a proper SQL model you would normally need only a single query to find those items.

A better database model

There is a better way to model this in a database, it’s called table inheritance. It exists in two forms: single table inheritance and multiple table inheritance.

Multiple table inheritance

Multiple table inheritance requires using three tables instead of a single one. The idea is to keep the common data in a "parent" table, which references items either in the Event table or in the BlogPost table. The type column (called the discriminator) helps to find out whether the related item should be looked up in the Event table or in the BlogPost table. This is called multiple table inheritance because it models the same problem as object inheritance using multiple database tables.

Multiple table inheritance

When you have a LabLogItem, you check the type field to know in which table to find the related item, then you look for the item whose ID equals related_id.

Single table inheritance

Alternatively, the same can be modelled in a single table. All the fields are present for all the types of LabLogItem, but the ones that do not pertain to the particular type of item are left empty. This is called single table inheritance.

Single table inheritance

Single or multiple table inheritance

The difference is really only in how the data is stored in the database. On the PHP side this does not change anything. One may notice that single table inheritance tends to improve performance, because everything is in a single table and there is no need for joins to get all the information. On the other hand, multiple table inheritance allows a cleaner separation of the data and does not introduce "dead data fields", i.e. fields that remain NULL most of the time.

Table inheritance with Symfony and Doctrine

Symfony and Doctrine make it extremely easy to use table inheritance. All you need to do is to model your entities as PHP classes and then create the correct database mapping. Doctrine will take care of the hassle of implementing the inheritance in the database server.

Please note that the code I present here is not exactly what we use in RocketLab; we are developers and as such we always have to make things harder. But the idea is there…

The parent entity

In the case of RocketLab we created a parent (abstract) entity, called LabLogItem, that contains the common properties.
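
A minimal sketch of what such a parent entity can look like (the property names and column types are illustrative, not the actual RocketLab code):

<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 * @ORM\InheritanceType("SINGLE_TABLE")
 * @ORM\DiscriminatorColumn(name="type", type="string")
 * @ORM\DiscriminatorMap({"event" = "Event", "blogpost" = "BlogPost"})
 */
abstract class LabLogItem
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue
     */
    protected $id;

    /** @ORM\Column(type="string") */
    protected $title;

    /** @ORM\Column(type="date") */
    protected $date;

    /** @ORM\Column(type="text") */
    protected $description;

    public function getTitle()
    {
        return $this->title;
    }

    // Further getters and setters omitted for brevity.
}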

There are several things to note about the mapping:

  • @ORM\InheritanceType: indicates that this entity is used as the parent class in the table inheritance. This example uses single table inheritance, but using multiple table inheritance is as easy as setting the parameter to "JOINED". Doctrine will create and manage the single or multiple database tables for you!
  • @ORM\DiscriminatorColumn: indicates which column will be used as the discriminator (i.e. to store the type of item). You don't have to define this column in the entity, it will be automagically created by Doctrine.
  • @ORM\DiscriminatorMap: this is used to define the possible values of the discriminator column as well as to associate a specific entity class with each type of item. Here the discriminator column may contain the string "event" or "blogpost". When its value is "event" the class Event will be used, when its value is "blogpost" the class BlogPost will be used.

Basically that’s the only thing you need to use table inheritance, but let’s have a look at the children entities.

The children entities

We have two regular entities to model the events and blog posts. Those entities extend LabLogItem.
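
A possible sketch of the children (the extra fields are again illustrative, not the original code):

<?php

use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Event extends LabLogItem
{
    /** @ORM\Column(type="string") */
    protected $place;

    /** @ORM\Column(type="string") */
    protected $eventType;

    /** @ORM\Column(type="boolean") */
    protected $registrationRequired;
}

/** @ORM\Entity */
class BlogPost extends LabLogItem
{
    /** @ORM\Column(type="text") */
    protected $body;
}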

There is not much special in the children entities. An important thing to note is that the common fields defined in the parent entity LabLogItem SHOULD NOT be repeated here. You may also notice that the inheritance annotations (@ORM\InheritanceType, @ORM\DiscriminatorColumn, @ORM\DiscriminatorMap) are not repeated in the children: they are defined once on LabLogItem and apply to the whole hierarchy.

From now on, when you create a PHP object of type Event and ask the entity manager to persist it, Doctrine will automatically do the complex work for you. From the developer's point of view, Events and BlogPosts are just entities like any others.

It’s easy to do operations on items which you don’t know exactly the type:

But if you know the type of item you are using, you can still use them as regular entities:
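
For example (again a sketch; the eventType field is the illustrative one from the Event class above):

<?php

// Only Event rows are returned: Doctrine automatically adds the
// discriminator condition (type = 'event') to the generated SQL.
$techtalks = $entityManager
    ->getRepository(Event::class)
    ->findBy(['eventType' => 'techtalk']);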

Conclusion

As you can see above, using table inheritance with Symfony and Doctrine is very easy. It's just a matter of creating the parent class and the correct mapping. Furthermore, you can switch from single to multiple table inheritance by modifying a single line of code.

This technique should be used whenever you need to store items with a common state but that are very different in their nature.

The making of plaene.uzh.ch

Starting with a lot of information

Last autumn, we were asked to build a new version of plaene.uzh.ch.

Alexandra Blankenhorn and her team from the University of Zurich (UZH) had already done a great deal of work and offered a detailed concept of the information they wanted to provide for every building. And they had a lot of information: lists of all institutes, lecture rooms, museums, libraries, canteens and computer workstations for every building.

The application was to be based on a map with all the buildings on it and some more information about the surroundings of every building. They didn't just provide us with a lot of data, but also with a lot of freedom and trust to create a good and simple solution for the task.

Designing for a small screen

These long lists of buildings and lecture rooms for every building were the major challenge when Zahida started with the first scribbles. She designed mobile first and started with the most difficult situation: a lot of data on a small screen.

To solve the problem, all additional information was banned from the map, since its purpose is to find a building and see how you can get there. Information about rooms and services inside a building opens in a separate layer if you tap on the building name. Loïc called it the "Inspector" and the name stuck. The navigation inside the Inspector is smartphone-oriented (with tappable lists and horizontal movement) and we kept this behavior for tablet and desktop as well. Designing mobile first helped us to focus on what's essential and also to create easier-to-use concepts for the classic website.

Inspector scribbles

Wireframes for all the different cases

To create the basic grid and functionality of the website, Zahida designed Wireframes for Desktop and for Tablet and Smartphone in both orientations.

Wireframes Inspector

Ala came up with very beautiful designs — as always! And, they even built a clickable prototype.

Developers are the first test-users

As soon as they started developing, Reto, Colin, Sébi and Marco came back with a lot of feedback. They were the first ones to actually use the application and therefore the first ones to notice when something didn't work out as we planned. We continually adapted the wireframes and provided solutions for emerging problems like "there are too many building markers on the map, they are overlapping" or "some institutes are spread across different buildings, how do we handle that in the Inspector?". We met with the UZH project team at least once a week to discuss open issues and decide which solution to go with. The best way for us was to define only the basic functionality of the site, but then stay flexible and adapt the details in collaboration with the team representing the stakeholders.

Choice of the map service

The plan was to display all the buildings with clickable "markers" on an interactive map. The first thing that comes to mind when talking about maps is Google Maps. But because of privacy concerns about the way Google tracks user requests via cookies, we also looked for other options. In the end we chose the open data solution provided by OpenStreetMap.

OpenStreetMap it is, now what?

Looking at the project from a technical point of view, we were facing some challenges. First of all, the decision had been made to host the site on the university network. Also, the map tiles were to have a customized style. That meant we couldn't just fetch the tiles from the OpenStreetMap servers; we had to set up our own server and serve customized tiles from there.

Setting up the server

To be able to render our own map tiles, we installed Mapnik2[1] with all of its dependencies. Mapnik2 uses PostgreSQL as a database, so we decided to also use PostgreSQL for the project itself. We downloaded the map data from one of the several OpenStreetMap servers[2]. With all of that set up, we could use the Python scripts Mapnik2 provides to generate the tiles according to an XML stylesheet which specifies what the tiles should look like. Depending on the bounding box of the area you are rendering tiles for, this can take quite a while.

Openlayers and HTML markers

We did not find any way to display markers containing HTML by default, so we created two classes, HtmlBox and HtmlMarker, which enable HTML markers. Click here to check it out; there is also an example HTML file provided in the archive, in case you would like to use it too. There is a little catch with this, though: you will need to edit two lines in "lib/OpenLayers/Layer/Markers.js" or in the OpenLayers.js file respectively. We have opened a pull request[3] for that matter, but sadly it has not been merged yet. There you can see what needs to be changed to get our classes to work flawlessly.

Small and big screens

Since the website should look good on small devices too, responsive design was the way to go. We used media queries to apply different CSS styles based on the size of the viewport. That is a very easy but powerful way to make a site viewable on differently sized devices.

And now it gets Hollywoodesque

Everybody involved in this project was very dedicated, committed and constructive. A special "thank you" for the awesome collaboration goes out to the UZH project team, to Ala and to Steve, who facilitated our regular project retrospectives and further improved our work together.

[1] http://mapnik.org/

[2] http://wiki.openstreetmap.org/wiki/Planet.osm#Country_and_area_extracts

[3] https://github.com/openlayers/openlayers/pull/53