Compressing a transparent PNG like a JPEG with rokka got even easier

We took the technique of “simulating” alpha channels in JPEGs with SVG one step further and made it even easier to use with rokka. Now you just have to set the jpg.transparency.autoformat stack option to true and rokka will return the most appropriate format whenever the rendered image has a visible alpha channel. No need to build the SVG on the client side anymore.

Unlike the approach described last week, here we had to embed the actual binary images as data URIs, since most browsers don’t load remote resources referenced from an SVG that is itself loaded as an image. This makes the SVG response quite a bit bigger than the binary formats, but gzip compression helps enough here.

Another obstacle was detecting whether the client actually supports SVG. Firefox and Safari, for example, just send “*/*” in their Accept header when requesting an img in HTML. But consulting Can I use? revealed that browsers without SVG support are very much negligible. Therefore, when this stack option is set, rokka now returns an SVG as long as “*/*” appears in the Accept header (or image/svg+xml is explicitly mentioned). Unless the browser states that it knows how to handle WebP, that is, in which case we return WebP instead (since it’s still smaller than the SVG approach). If no Accept header is set (or it contains no */*), we return the whole thing as PNG.
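To illustrate, here is a minimal sketch of that decision logic in TypeScript. This is not rokka’s actual implementation, just the rules described above written out:

```typescript
// Sketch of the format negotiation described above. Assumes the requested
// image is a JPEG with a visible alpha channel and the stack option is set.
function pickFormat(acceptHeader: string | undefined): 'webp' | 'svg' | 'png' {
  const accept = acceptHeader ?? '';
  // Browsers announcing WebP support get WebP, the smallest option.
  if (accept.includes('image/webp')) return 'webp';
  // "*/*" (or an explicit image/svg+xml) is treated as "SVG is fine".
  if (accept.includes('*/*') || accept.includes('image/svg+xml')) return 'svg';
  // No Accept header, or one without */*: fall back to PNG.
  return 'png';
}

console.log(pickFormat(undefined));        // "png", e.g. curl without headers
console.log(pickFormat('image/webp,*/*')); // "webp", e.g. Chrome
console.log(pickFormat('*/*'));            // "svg", e.g. Firefox or Safari
```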

All this only applies if you request a jpg, set the right stack option and the rendered image has a visible alpha channel. In all other cases, no magic is applied and just the stated format is returned (meaning, for example, that a normal jpg is delivered if there’s no visible alpha channel).

Below we load the image at https://liip.rokka.io/dynamic/resize-width-500-height-232--options-jpg.transparency.autoformat-true/7ed6427d9edaaaa60bf21f503022d56a208962aa.jpg. It should render as WebP (size: 17.8 kB) in Chrome and as SVG (size: 20.5 kB) in all other browsers. Or as PNG (size: 24 kB) if you download it, e.g. with curl, without any Accept header.


How to compress a transparent PNG like a JPEG with rokka

Inspired by this blog post by Peter Hrynkow, we implemented some new features in rokka to provide an answer to his question: “Wouldn’t it be great if you could get the compression of a JPEG and keep the transparency of a PNG?” There is now a new operation in rokka which allows you to extract just the alpha channel as a mask.

Let’s take a picture: a PNG with an alpha channel surrounding the screen. It’s 24 kB (thanks to the use of pngquant and zopflipng on rokka it’s smaller than usual; without those it would be 95 kB). The green pattern is the background and not part of the image.

This is the JPEG version of it, but it lost its transparency in the process. Just white, no green pattern, not the desired result. But it’s only 12 kB in size.

(As a side note: with lossy WebP, which supports transparency, the size would be in the same range as the JPEG, with transparency included. But that’s not the topic here, since only Chrome supports WebP.)

Now, for Peter’s solution, we need a PNG containing just a mask marking where the image should be transparent. rokka can create that automatically for you from the same source image uploaded before, no additional work needed. The mask PNG has a size of 5 kB.

Alpha mask PNG

You then stitch them together with an SVG snippet (for more details, see the blog post mentioned above) and embed that in your page.
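The snippet is essentially an SVG luminance mask applied to the JPEG. As a rough sketch of its structure (following the technique from Peter’s post; the stack names and dimensions below are placeholders for your own two rokka stacks):

```typescript
// Hypothetical helper producing the inline SVG snippet: the alpha-mask PNG
// is applied to the JPEG via an SVG <mask>, restoring the transparency.
function transparentJpegSvg(jpegUrl: string, maskUrl: string, w: number, h: number): string {
  return `
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     width="${w}" height="${h}" viewBox="0 0 ${w} ${h}">
  <defs>
    <mask id="alpha-mask">
      <image width="${w}" height="${h}" xlink:href="${maskUrl}"/>
    </mask>
  </defs>
  <image width="${w}" height="${h}" xlink:href="${jpegUrl}" mask="url(#alpha-mask)"/>
</svg>`;
}

// Placeholder stack names: one stack renders the JPEG, the other the mask.
const hash = '7ed6427d9edaaaa60bf21f503022d56a208962aa';
document.body.insertAdjacentHTML('beforeend', transparentJpegSvg(
  `https://liip.rokka.io/jpeg-stack/${hash}.jpg`,
  `https://liip.rokka.io/mask-stack/${hash}.png`,
  500, 232,
));
```

The newer approach described further up does the same thing server-side and embeds both images as data URIs, so the SVG also works as a standalone image.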

And the result looks like this:

The mask PNG and the JPEG together have a size of 17 kB, smaller than the initial 24 kB of the original PNG (or 95 kB unoptimized). For bigger pictures, the gain would be significantly larger. And all you have to do is upload one image with the correct alpha channel, initially create two different rokka stacks and add an SVG snippet. The conversion is then done automatically by rokka.

A little remark: as you can see, there’s a little shadow below the screen in the alpha channel. It is hardly noticeable in the SVG version, due to the more or less “double” transparency. We couldn’t come up with a quick solution for this problem, but we are happy to look into it if anyone depends on it. In short, this approach produces the best results if you only have 1-bit transparency. In other cases, there may be small differences in the output.

Update: See our new post about making this even easier when using rokka.


Let’s talk about gender diversity, it’s not a taboo

When I was a little girl, I was more into things than people. I did not like Barbies or dolls, but I was fascinated by Barbie’s dog because it was battery-driven. I loved how it worked and moved around, and it had things you could stick to its tongue, like a bone or a newspaper. And it was cute.

This led to me being excluded from most “hangouts” with girls as I grew up, as they were mostly interested in things like Barbies, and later in boys and make-up. I wanted to pay more attention to crafts, Lego and computers. My parents never told me what I should spend my time on; they simply supported me in developing naturally.

I don’t know what all of these girls chose as their careers as they grew up, as I did not need nor want to keep in touch with them. The few I know of are not in tech.

As a gadget nerd, I wear an Apple Watch. Among other things, it tracks my heartbeat. When I read the research that said that women are typically more interested in people than men are, my heartbeat went up quite a lot. I became angry and upset and wanted to scream BULLSHIT. My thoughts went along these lines: “I am not more interested in people than things, so the research is false and obviously some stupid fu**”…

I stopped myself there. When I start swearing in my thoughts, I know that something is not right.


Headless B2B marketplace?

The eCommerce world is undergoing a major evolutionary step, moving away from tightly coupled monoliths towards modular monoliths. I know the word monolith has a lot of bad connotations in the IT world, but I think this image is a bit of a knee-jerk reaction. In most cases a monolith is the right starting point for a project. The key thing is choosing a monolith that allows for evolution as the project scope or needs expand or change. This is something that eCommerce solutions of the past tended to fail at, hence the bad image for monoliths. Magento v1 is an awesome eCommerce platform as long as the customization needs fit within the bounds hardcoded into the code base. However, simply replacing Magento v1 with a microservice architecture might avoid any hardcoded walls, but it would bring productivity to a halt. Martin Fowler states: “don’t even consider microservices unless you have a system that’s too complex to manage as a monolith”. Or the corollary to this: microservices must help to simplify your system.

With this precursor out of the way, let’s look at some of the emerging players in the eCommerce space. Since I am currently evaluating options for a concrete project, let’s look at them against the requirement of working as a headless B2B marketplace. But first, let me briefly explain those requirements.

Headless

The customer is aiming to provide multiple touch points. There will be a web interface for desktop-class monitors, which will get the initial focus. But down the line there will potentially be additional desktop-optimized UIs and, more importantly, various mobile apps (maybe one focusing on purchasing, another on logistics, and so on). As such, the eCommerce platform needs to make it easy to get to the data and business logic via some sort of API.

B2B

Business to business generally changes the requirements quite a bit, shifting the focus from presentation to efficiency. On a B2B website, keyboard shortcuts are more relevant than on a B2C site, for example. Searching by SKU or the ability to scan barcodes also matter more. So do topics like permission management, to define who can fill shopping carts and who can complete purchases. Multiple shopping carts, shopping lists, repeat orders and the like are also classic B2B requirements. Topics like up-selling are relevant as well, but might need a different spin due to the separation between the people choosing products and the people completing purchases. Pricing, too, is very different from B2C, starting with the use of net (non-VAT) prices and going up to customer- and quantity-specific rules.

Marketplace

On a marketplace, more than one merchant sells products. Usually any product can be sold by multiple merchants, each with their own price and stock. Metadata, however, tends to be normalized across all merchants, which creates additional challenges in importing that data and ensuring consistently high quality. Shipping costs, and even more importantly the handling of returns, also become considerably more complex.

So let’s look at some of the contenders…

Magento v2

I should rather say v2.2, since previous versions were plagued by bugs and releases contained a lot of backwards compatibility breaks. The new release promises to finally stabilize the APIs, and a good sign is that Magento is now integrating the feedback of the community more actively into the release and development process. My fellow Liiper Maxim is quite happy with how things are going there. I am not entirely sure why they built so much infrastructure code rather than just using ZF2+ or Symfony, but overall they implemented similar patterns, which enable easier refactoring and extensibility. Magento is also taking B2B more and more seriously, but out of the box the feature set still needs a bit of work. Overall, Magento v2 is a big step forward from v1, though it is also clear that the code was refactored, not entirely reinvented. The big advantage of Magento is of course still its huge community. While the code is open source, for bigger projects we would recommend their Enterprise offering. Unfortunately, they do not seem to provide a good overview of the differences.

Spryker

Spryker is surrounded by quite some hype at the moment. I say hype not in the negative sense; they simply managed to get a lot of attention very quickly. Quite impressive! One of the things that makes Spryker unique is that they denormalize relational data into Redis, which enables high performance without a reverse proxy. That being said, reverse proxies and invalidation are challenges, but well-understood ones. And when working with international customers, a reverse proxy in different regions around the world might be necessary anyway. Keeping the data in Redis sufficiently in sync is a challenge unique to Spryker. On the siroop.ch project, which runs on an older version of Spryker, I saw that this was a constant source of issues. They are rewriting this part right now, so hopefully it will be less painful in the future. They also told me that they will make the data contract between the system writing to Redis (Zed) and the one reading from it (Yves) more explicit. Another aspect which sets Spryker apart is that their backend is designed to allow separating e.g. user management and order management onto different servers if necessary. This architecture could lend itself to flowing more naturally into a microservice architecture. They are planning to add B2B features and even marketplace functionality to the core, but these features will only start appearing in the coming months. Likewise, while Zed and Yves talk via HTTP, Yves itself isn’t made for headless setups out of the box. Here again they are planning to offer a solution soon and are hopeful that the first such features will land this year already. Their licensing is a bit unique in that one pays for developer seats.

My main criticism of Spryker is that, being based on Silex rather than on Symfony full-stack, it lacks a lot of developer productivity tools. For example, to solve performance issues they have multiple independent Pimple (i.e. DI) containers, one per module. This in turn means it is impossible to get a global overview of the available services. For the routing they could in theory provide something like this fairly easily, but they have not emphasized it.

Sylius

I have followed this project from its very beginning, when Pawel started appearing on the #symfony IRC channel. His attention to quality impressed me right away and really shows in Sylius, starting with the use of BDD, which is also a great way to document behavior via test code. While initially this perfectionism prevented him from building a community, the project has long overcome this and now has quite an active community. On top of this, there is now a company dedicated to Sylius support and training. The code is entirely open source, with no enterprise versions or other licensing fees. They are now very close to that very first 1.0 release, and for a while now releases have come with quite detailed upgrade instructions whenever backwards compatibility breaks had to be made. In terms of modularity I think they are on par with Spryker. Given that they are built on the Symfony full-stack framework, it is also easy to leverage and integrate the countless bundles out there. Their large set of reusable components is also ready to be used even outside of Sylius itself. I am also quite excited about their attention to single page apps (SPAs).

OroCommerce

The Oro team, consisting essentially of a lot of ex-Magento core developers, is certainly quite experienced in eCommerce. Their first products were OroPlatform and OroCRM, which are both interesting options for customers that want to highly customize their CRM or simply want to build desktop-style custom applications. OroCommerce is a bit younger, but they emphasized B2B very early on, I suspect because they sensed that Magento v2 did not initially put much of a focus on B2B. Similar to Sylius, they are based on Symfony full-stack. In terms of licensing, they offer a free open source basic version, but the Enterprise version seems necessary for larger projects. I feel they have not yet really managed to build a large community.

DrupalCommerce v2

While Drupal 8 certainly improved its core towards easier and cleaner extensibility by adopting interfaces, I would still argue that, from the list in this post, it is the one most like the monoliths of the old days. However, DrupalCommerce still manages to be remarkably extensible. We built Freitag.ch on Drupal 7 and Commerce v1; that shop is quite special in that most products in the store are one of a kind. The next version, which builds on Drupal 8, looks to improve things further here. They basically started by building reusable components, which they then integrated into Drupal to build version 2, which just reached RC1 status. Where DrupalCommerce has always shone (and the reason why we picked it for Freitag.ch) is that through Drupal it provides a full-featured CMS where the commerce parts are first-class citizens. This means that advanced storytelling is possible, entirely driven by content authors, rather than requiring a development sprint for each bigger change. Storytelling does not tend to be a key requirement for B2B for now, but I think it will become more relevant: companies that have been holding out on adopting digital B2B mostly did so exactly because they felt that, with the approach most companies take, the inspiration part of the commerce experience was lacking. The code is fully open source and consulting offerings are available.

What it all means…

Quite honestly I am very impressed with this new generation of eCommerce solutions. All of them have different strengths and weaknesses.

Spryker has a unique architecture that sets it apart from the rest, which however also means that even a minimal setup has the most moving parts. Sylius, OroCommerce and DrupalCommerce are all based on a larger ecosystem, which allows other features and even applications to be integrated quite seamlessly.

Sylius and DrupalCommerce are fully open source, while Magento and OroCommerce provide Enterprise offerings which are more or less required for serious work. Spryker again goes in a different direction by requiring developer licenses. All of them have companies behind them that offer commercial support when needed. In terms of community, Magento seems to have the lead, and OroCommerce seems to be struggling here a bit.

Sylius is the only option that, as far as I am aware, has stated headless/SPA support as a focus. All of them currently provide the key APIs necessary, with Spryker lagging behind a bit, since Yves does not provide any API out of the box yet.

What all the projects on this list have in common: they are still at a fairly young stage of development, having either gone stable only in the past year or two, or even just approaching their first stable release. The good news is that this means there will still be many years of support for those versions, and the ecosystems will continue to grow. However, in terms of getting things done quickly for more standard eCommerce requirements, Magento v1 probably still beats them all.

Overview of open source API gateways

HTTP-based APIs have long established themselves as a successful pattern for organizations. Increasingly, these APIs are made available to the public, or at least are leveraged more and more by disconnected development teams within organizations. Where the first APIs just drove the live search on a website, APIs these days provide extensive functionality for internal and external development. As such, there is a need to centralize access to documentation, authentication and permissions, so that users can easily discover and leverage those APIs in a way that prevents negative effects on other users.

As so often when new patterns emerge, a new type of software solution emerges with them: in this case, API gateways. Given our affinity towards open source here at Liip, we have studied the market a bit and want to present a very high-level overview. We would very much appreciate additional first-hand experience in the comments below!

3scale

3scale was bought by Red Hat in 2016 and subsequently open sourced at github.com/3scale. We have not tried to set it up ourselves, but from past experience, previously proprietary software can be tricky to get running. The product covers all the key pieces: API management, rate limiting, access control and analytics. There is a hosted option starting at $750 per month, with 500k API calls per day and some other limitations.

WSO2

Originally created at IBM, WSO2 has a close affinity to the Apache community. It can be self-hosted, but the company behind the project also offers a hosted cloud solution. Setup for a quick proof of concept was simple, and we had a proxy running within 10 minutes. However, we found the UI a bit complicated and limiting, and we ran into some errors when we tried to save our definitions.

Kong

Mashape built Kong on top of Nginx, which is the web server of choice for most of our projects these days. Kong originally required Cassandra for config management, but since version 0.8 it also supports PostgreSQL. The fact that it is not yet at 1.0 makes me a bit nervous, but a web search did not turn up many complaints about backwards compatibility issues. Does anyone have practical experience to share here? Mashape of course also offers a hosted enterprise version, but there is no word on pricing on their website. They do not seem to offer an admin GUI as part of their open source offering, but there are quite a number of open source options available. There are quite a lot of plugins available, and writing your own in Lua is not too hard.
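To give an idea of the configuration workflow, here is a sketch of registering an upstream API via Kong’s admin API on port 8001. It assumes a Kong version that has the services/routes admin API (older 0.x releases used a single /apis resource instead), so treat the endpoints as illustrative:

```typescript
// Sketch: register an upstream service with Kong and expose it on a route.
// Kong then proxies http://localhost:8000/example to the upstream.
const ADMIN = 'http://localhost:8001';

async function registerApi(): Promise<void> {
  // 1. Declare the upstream service Kong should proxy to.
  await fetch(`${ADMIN}/services`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'example', url: 'http://upstream.internal:8080' }),
  });

  // 2. Expose it under a public path on the proxy port (8000 by default).
  await fetch(`${ADMIN}/services/example/routes`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ paths: ['/example'] }),
  });
}

registerApi().then(() => console.log('proxy ready at http://localhost:8000/example'));
```

Rate limiting, authentication and similar concerns are then added by attaching plugins to the service or route in the same style.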

Tyk

Another option that makes it easy to run locally or in the cloud is Tyk. The dashboard requires a commercial license, but for on-premise use it is free for a one-node setup. The hybrid setup is an interesting option, as it allows you to keep the API calls in your datacenter while leaving the dashboard and management to the cloud. The current version assumes that the backend API is secured by IP whitelisting, but they are looking to improve here. Setup was very easy and we were up and running within minutes. Tyk seems to focus on simplicity, which is both good and bad.

We gained in-depth, first-hand experience with Tyk by setting up the opentransportdata.swiss API gateway last year.

Update: we originally incorrectly claimed that the on-premise version did not provide a dashboard. For on-premise setups a dashboard license can be bought, and it is free for one-node setups. I removed the pricing information, since there are simply too many options to choose from, given that they provide on-premise, cloud and hybrid offerings. The good news is that the cloud version offers a free tier to start with, for up to 50k calls per day.

Takeaway thoughts

In general, there seem to be quite a lot of solid choices for open source API gateways. They all check most of the boxes, and they all provide some sort of commercial or hosted option. So in the end the devil is in the details, and from the point of view of an agency it makes sense to standardize as much as possible. Given that we are already running a large project on Tyk, it makes sense for Liip and our customers to lean towards Tyk.

How to start an inno project and build commitment in your team?

You have a vision, you have gathered a team and you even have a budget. And now, how do you get your team started? List your team’s expectations, build a common understanding, and let your team take on responsibility. You also have to come to terms with the fact that the project involves uncertainties.

Our ambition is to create a tool that provides micro-learning about cognitive biases. Today we have a prototype. Last spring, we had only a vision to guide us. As told in a previous post, one of my colleagues detected a need in an industry and an opportunity for us to create a new tool. He gathered a small team and invited us to a kickoff meeting. We were all motivated. How could we proceed?

During the kickoff, we tossed ideas around and used sticky notes to draw the project. It was important that we all had a common understanding of the tool we wanted to create. This kickoff meeting was also the moment when we created a team spirit and built personal commitment.

Ownership, responsibility and role

As motivated as I was to play my part, I needed to understand how I could contribute to the project and how much time it would involve. We started by writing down the outputs we expected from the meeting. The expectations were varied.

Our expectations for the kickoff meeting

Kevin expected us to take ownership. The initial idea came from him and he had written a whitepaper about it; now he expected us to work as a team and make the idea ours. This is what he meant by ‘Co-sign Whitepaper’.

To me, ownership meant responsibility. The moment I commit to a project, I stop saying ‘Kevin’s idea’ or ‘Kevin decided’ or ‘Kevin meant’, and start saying ‘we think’ and ‘we decided’. It also means that I commit to playing my part and making time to work on the project.

I needed to understand the role I would play, in other words how I would contribute to the project with my competences. This is expressed as ‘Where do I position myself?’ From the beginning we have been a multidisciplinary team, and we have learnt to contribute with our respective skills. Understanding my role leads to better planning: if I understand my tasks and how I relate to the other team members, I can organise my agenda and be available when I am needed.

During this meeting we also decided how we would communicate about the project to our stakeholders, who at this point were internal. Finally, we defined the next steps and decided on the content of the next workshop.

Map the idea – understanding through drawing

We were sitting down, listening to Kevin. Sitting around a table is so limiting! Ideas cannot express themselves, they keep eluding us, and the energy slowly runs low. We couldn’t see what Kevin was explaining. After a moment of deep concentration, I tend to relax a bit, which means I am no longer as concentrated. At some point, we were all running low on energy. So we started drawing.

White walls are a blessing. Someone starts drawing, you add your idea, and then everyone can see it and contribute their own.
It started with a sketch, and step by step it became like a map. A map of the idea, where we could navigate, see the stakeholders, and start grasping whom we needed to talk to, what we needed to understand, what remained unclear, what our role was, and what our strengths and weaknesses were.

Let your team take ownership by drawing together the idea.

It very much looks like this: drawing, talking and gesturing. When you stand, the flow of ideas wraps you up and, before you realize it, you are ‘in it’: you take ownership and you belong. You stand and draw together. It has nothing to do with sitting and looking at someone talking; you are part of it.

Drawing of our project

Our drawing became more complex as our understanding of the situation got clearer.

Be kind to your blue side and deal with uncertainties

Have you ever heard of the DISC assessment? It attributes colors to people based on a test. I never took it myself, but I have often heard friends refer to it jokingly. When they talk about the ‘blue colleague’, they mean his preciseness, his attention to detail and his capacity to be systematic. As I started this project, I realised that a part of me, which I will call my ‘blue side’, backed off because it was unconvinced. My blue side tends to restrain my overly enthusiastic and risky side (I don’t know the color of that side yet ;-)

In other words, during this meeting my blue side realized that there is a huge unknown in this project. When you start an innovation project, you have to be aware that some uncertainty and risk will always be present. During my studies and work life, I was trained to avoid mistakes and to evaluate risk; I usually try to have a fairly good idea of the success I can expect from my actions before I perform them. Starting an innovation process is the contrary of this. It is jumping into the unknown and imagining something that does not exist… yet. You need to be open-minded and accept the risk and the unknown.

To conclude: we mapped the project and I accepted the possibility of failure

It was time for me to accept that mistakes are part of the game and to come to terms with the possibility of failing. An innovation process is made of ups and downs, tests, successes, mistakes and iterations. The risk is part of the game.

During this first meeting, we mapped the project and the stakeholders. It gave us the necessary common ground to start working together. Drawing the project allowed us to clearly see the expertise we needed. We planned the next steps and organized the first workshop, to which we would invite other experts. The project had officially started.


A game jam at Liip: Ludum Dare 39

Recently we hosted a game jam called Ludum Dare in the Arena of our Zürich office. It’s important to us to be a part of the tech community, and there’s a growing scene of indie game developers in Zürich.

What is a game jam? It’s a challenge to create a video game from scratch in a short amount of time. There are a lot of different ones being run; for Ludum Dare you and your team have 72 hours to make and submit your game. Although that may sound impossible, game jams are popular exactly because they force you to be creative instead of dithering about the details of what you want to make.

Ludum Dare

Ludum Dare has been running for fifteen years now, and this was the 39th edition. Thousands of people across the world participated, all creating games on the same theme—which was not announced until the start of the jam. You can always participate at home, but getting together with other jammers is much more fun. It also lets you meet new people and form new teams. That’s very necessary, because making a game requires so many different skills.

In Zürich, the local game developers’ group Gamespace organises meetups for Ludum Dare, and this was the second time Liip has hosted them. It’s much easier to jam if you have a big space where you’re not disturbing anyone by spreading out electronics and making weird sounds.

The Jam

We started on Saturday morning with croissants and orange juice and discussed the theme: Running out of power. A good jam theme should have lots of different possible interpretations, and our group discussed running out of computing or graphics power, the Spoon Theory, losing political power, losing magical powers, or having to constantly charge your mobile phone in the game. In the end we split into two groups. One decided to make a story game about coping with depression, and the other started on a platformer about a magical creature giving up their powers to become more human.

The groups got down to business and began writing code and using graphics tablets to make the artwork. Both games were programmed using the Unity engine, a popular choice because of its broad feature set and visual editor.

For the game Dryad, which I worked on with David Stark, we wanted to come up with all our sound effects from scratch. This meant repurposing whatever office supplies we could find in unexpected ways! The sound of sticky tape being pulled off the roll became the sound of a magical spell. Riffling a block of post-its, we got the sound of a crossbow firing a bolt. The noise of triumph when you reach the end of a level comes from a table football trophy being struck!

The Results

By the end of Sunday night, our games were mostly complete and only needed finishing touches before being submitted on Monday. Both of them are available to play online: Dryad and 03:00 AM. We’ll discuss the creation process at a future Gamespace meetup. In the meantime, the games from the Ludum Dare 38 jam (also held at Liip Zürich) are available here:


Houston: a mobile app to replace a radio communication system

Bring your company radio system into the 21st century using VoIP and mobile applications, improving communication quality while reducing costs.

With the project Houston, we took on the challenge of replacing the old radio network of the Transports Publics Fribourgeois (TPF), a Swiss public transportation company, with a system that uses the existing data network and runs on mobile applications. This solution solved the problem of maintaining a dedicated radio network. It also improved both the overall quality of the communication and the availability of the system.

Initial situation: communication based on radio system

For decades, employees of the Transports Publics Fribourgeois (TPF) have been using standard radio to communicate with each other. The radio system is meant to cover the needs of its users: it is spread over more than 200 buses, 30 team leaders and the operation center. There are three types of users, with specific needs:

  • The operators – working in the operation center – use the radio to speak to a specific bus driver, or to broadcast messages to all or some of the running buses.
  • The team leaders are dispatched at different locations. They use the radio to manage daily events – such as the replacement of a driver – or to inform several drivers of a change in the network, for example in case of an accident.
  • The bus drivers use the bus radio as their main means of communication while driving. They can call other buses, the team leaders or the operation center.
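Purely as an illustration of these roles (this is not the actual Houston data model), the call patterns above could be captured in a signalling payload along these lines:

```typescript
// Hypothetical sketch of the call types described above: operators and team
// leaders can broadcast, while any user can call a single counterpart.
type Role = 'operator' | 'teamLeader' | 'driver';

type CallTarget =
  | { kind: 'unicast'; busId: string }            // e.g. operator -> one bus
  | { kind: 'broadcast'; group: 'all' | string }; // e.g. all buses, or one line

interface CallRequest {
  from: { role: Role; id: string };
  target: CallTarget;
  priority: 'normal' | 'urgent'; // e.g. accidents preempt routine traffic
}

const accidentNotice: CallRequest = {
  from: { role: 'teamLeader', id: 'tl-12' },
  target: { kind: 'broadcast', group: 'all' },
  priority: 'urgent',
};
```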


Logo TPF



Compare and convert HEIF, JPEG and WebP images with rokka

TL;DR

Go to https://compare.rokka.io/_compare/ and compare the output of the HEIF, JPEG and WebP formats. You can even upload your own pictures. All done with rokka.

Long version

Apple generated quite some hype with their support for the HEIF image format in the upcoming iOS 11 and macOS High Sierra. HEIF (High Efficiency Image File Format) is a new image file format which supports many use cases and uses HEVC, also known as H.265, for the compression part. Apple is using HEIF on their latest devices as a replacement format for storing pictures and claims up to 50% savings in storage.

Even though no browser supports HEIF yet (not even Safari in the current betas), we nevertheless thought it would be cool to add HEIF support to rokka, our image storage and delivery service. And so we did.

Unfortunately, there’s currently no out-of-the-box solution for creating HEIF files. But Ben Gotow’s site jpgtoheif.com inspired us: he published instructions on how to create HEIF files with the help of ffmpeg, x265 and Nokia’s HEIF writer app. Due to the non-commercial-only license of that Nokia code, however, we use GPAC, which is published under the LGPL license, to create the HEIF container.

Looking at and comparing HEIF compressed images

What’s the fun when almost no one can look at the result? So we built a little site where you can compare the output of rokka’s HEIF, JPEG and WebP (the latter is only supported in Chrome) and even upload your own pictures. Just head to

https://compare.rokka.io/_compare/

and enjoy it. The uploaded images will be deleted from time to time.

The site uses Nokia’s HEIF Reader JavaScript implementation, which decodes a HEIF image in JavaScript and renders it to a canvas element. This way, everyone can look at HEIF images and compare them to the JPEG and WebP output.

The site also allows you to play with different quality settings. All formats support a setting from 1 to 100, where 1 is the lowest and 100 the highest (which also means lossless for WebP). The quality settings of the different formats don’t really correspond to each other, so just play around with them and compare the sizes of the images at different settings.
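You can also do this kind of comparison directly against rokka’s dynamic render URLs, following the URL pattern shown in the transparency post above. A small sketch; note that the quality option names used here are an assumption on our part, so check the rokka docs for the exact stack options:

```typescript
// Sketch: build rokka dynamic render URLs at different quality settings and
// compare the resulting byte sizes. The ".quality" option name is assumed.
const HASH = '7ed6427d9edaaaa60bf21f503022d56a208962aa';

function renderUrl(format: 'jpg' | 'webp', quality: number): string {
  return `https://liip.rokka.io/dynamic/resize-width-500--options-${format}.quality-${quality}/${HASH}.${format}`;
}

async function byteSize(url: string): Promise<number> {
  const res = await fetch(url);
  return (await res.arrayBuffer()).byteLength;
}

Promise.all([byteSize(renderUrl('jpg', 40)), byteSize(renderUrl('jpg', 80))])
  .then(([low, high]) => console.log(`jpg q40: ${low} bytes, jpg q80: ${high} bytes`));
```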

We use pretty much the default settings of ffmpeg; maybe some things could be improved on that side. We also don’t know what kind of encoder Apple uses for generating HEIF images. So don’t take the compression we produce for HEIF as representative of what other encoders can do.

Also be aware that we asynchronously recompress JPEG images in the background with mozjpeg (see the rokka docs for details), so the first render output does not yet have the maximum compression we can get for JPEG images. Just hit the render button 10 seconds later to get the final compression (the site will inform you when that compression step is not done yet).


Hackathon on WebVR using A-Frame

What do you do with an HTC Vive and a few cardboards? Hack using A-Frame! We held a hackday to start creating virtual reality environments in Web browsers.

All participants of the hackathon

With the support of Michael Kohler, community organiser at Mozilla CH, we organised a hackathon in our Lausanne office. Food and beverages were provided, with our great terrace to enjoy them on.
Some devices were available: Google Cardboards to test our work, and an HTC Vive to create an even better experience.
A HoloLens was also available to test augmented reality. It was not the subject of the day, but it is always interesting to compare those two worlds.
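To give a flavour of what hacking with A-Frame looks like: scenes are plain HTML entities, and behaviour is added through small components. Here is a minimal component sketch, assuming the A-Frame script is loaded on the page (the spin component is our own example, not a built-in):

```typescript
// Minimal A-Frame component: rotates the entity it is attached to.
declare const AFRAME: any; // provided globally by the A-Frame <script> tag

AFRAME.registerComponent('spin', {
  schema: { speed: { type: 'number', default: 45 } }, // degrees per second
  tick: function (this: any, time: number, timeDelta: number) {
    const rotation = this.el.getAttribute('rotation');
    rotation.y += this.data.speed * (timeDelta / 1000);
    this.el.setAttribute('rotation', rotation);
  },
});

// Used in markup as:
// <a-scene><a-box spin="speed: 90" position="0 1 -3" color="#4CC3D9"></a-box></a-scene>
```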

