Bridging the gap: An experimental tool for redlining your animations

Cameron Calder · Feb 5

Co-written by Peter Vachon and Cameron Calder

Motion is an integral part of modern web and mobile applications.

Designing these interactions is an art, and the actual development of these movements requires a particular skill as well.

Unfortunately, there is no efficient way to communicate the idiosyncrasies of motion between the designer and the developer.

The outcome of this breakdown is often a gap between the animation the designer intended and the developer's interpretation of it.

If your team is co-located and designer and developer can sit side by side, you have the luxury of collaborating to knock out an animation.

That process can be quite satisfying and sometimes leads to stronger camaraderie.

But even this method can take time to translate and finesse.

For teams that aren’t co-located or are spread across time zones, this may not be a practical option.

Short of redlining motion specifications by hand, which is extremely time-consuming for the designer, there is no effective way to give the right information to a developer other than the designer coding it themselves.

We weren’t immune to this breakdown in the process, so we set off to find a solution.

Our first step was to audit existing options that would suit our needs.

Most options we found primarily enabled designers to create rich interactive prototypes, but fell short of offering a way to output animation specs or deliver code.

The options that could produce coded animations were either aimed at design/dev hybrid roles or had steep learning curves.

The optimal process we sought resembled something closer to Zeplin, where the designer could remain tool-agnostic and easily deliver animation specs.

After all, the issue we identified wasn’t the tool used to create the animation; it was how the animation was interpreted once created.

Lottie is the closest option we identified, since it lets you use an existing tool (After Effects) to create the animation and translates it into something of value for a developer (a JSON file that can be used on iOS, Android, the web, and React).

It’s a fantastic tool; however, the part of the process we wanted to solve involved translating UI transitions and component-based animations.

Introducing Project Cue

Our goal: Create a web-based tool for designers to deliver animation specs and usable markup, resulting in more accurate implementation in half the time.

What’s in it for designers?

Cue allows a designer to create animations within a platform they may already be familiar with.

We chose After Effects as the first platform to focus on because of the capability to extract the underlying keyframe values using the amazing Bodymovin plugin.

A designer can then upload the JSON, exported from Bodymovin, to Cue.

This enables the developer to view the animation translated to code.
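To give a sense of what the uploaded file contains, here is a minimal sketch in JavaScript. The top-level keys shown (`v`, `fr`, `ip`, `op`, `w`, `h`, `layers`) are part of the documented Bodymovin/Lottie JSON shape; the specific values are made up for illustration.

```javascript
// Sketch: basic stage info from a Bodymovin/Lottie JSON export.
const data = {
  v: "5.5.7",   // Bodymovin plugin version
  fr: 60,       // frame rate
  ip: 0,        // in point (first frame of the work area)
  op: 120,      // out point (last frame of the work area)
  w: 375,       // stage width in px
  h: 667,       // stage height in px
  layers: []    // one entry per After Effects layer, with keyframe data
};

// Duration in seconds = (out point - in point) / frame rate.
const durationSeconds = (data.op - data.ip) / data.fr;
console.log(durationSeconds); // → 2
```

A converter like Cue walks the `layers` array and maps each layer's keyframes onto a web-native animation format.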

What’s in it for developers?

Depending on the type of animation being created in After Effects, the conversion engine will output the appropriate markup.

In the case of animating basic solids and shapes, the motion attributes will be converted to CSS keyframe animations, and layers will be output as <div>s with associated class names.
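As a rough illustration (the class name, animation name, and values here are our own, not necessarily what the engine emits), output for a single solid layer might look like:

```css
/* Hypothetical output for one solid layer sliding up and fading in. */
.cue-layer-1 {
  width: 120px;
  height: 40px;
  background: #0f62fe;
  animation: cue-layer-1-enter 0.5s ease-out forwards;
}

@keyframes cue-layer-1-enter {
  from { transform: translateY(24px); opacity: 0; }
  to   { transform: translateY(0);    opacity: 1; }
}
```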

When creating more complex animations, such as those using masks or custom shapes, the engine will convert them to SVGs and use SMIL to assign the animation properties.
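For the SVG case, a sketch of the kind of self-contained output described above (illustrative only; the shape and timing values are invented):

```xml
<!-- A shape animated declaratively with SMIL: no external CSS needed. -->
<svg width="100" height="100" viewBox="0 0 100 100"
     xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="20" fill="#0f62fe">
    <!-- SMIL <animate> element: pulse the radius indefinitely -->
    <animate attributeName="r" values="20;35;20"
             dur="1.5s" repeatCount="indefinite" />
  </circle>
</svg>
```

Because the animation lives inside the SVG itself, the file can be dropped into a page as-is.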

Why CSS or SVG?

We identified two use cases, one for each type of output.

If the motion designer is creating a simple indeterminate loader, then outputting that animation as HTML and CSS works.

However, depending on the developer's implementation, an SVG animation can be self-contained, so the developer doesn't have to add code in multiple places.

Developers can simply embed the SVG in the areas where it is needed.

For the HTML and CSS case, let’s look at something like a footer alert.

When the conversion engine recognizes an element has a width equal to the size of the stage, it applies a width of 100%.

Similarly, when an element's position is anchored to the bottom of the stage, a position of bottom: 0 is applied rather than a value of top: 700px, for example.

This lets us reproduce layout- and page-level animations a little more accurately. It isn't perfect, but you get the idea.
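The two heuristics above can be sketched as CSS (the class name, timing, and animation are hypothetical, used only to show the stage-relative rules):

```css
/* Instead of copying absolute stage values from After Effects
   (e.g. top: 700px; width: 375px), the engine emits rules
   relative to the stage: */
.footer-alert {
  position: fixed;
  bottom: 0;     /* element was anchored to the bottom of the stage */
  width: 100%;   /* element width equaled the stage width */
  animation: slide-up 0.3s ease-out;
}

@keyframes slide-up {
  from { transform: translateY(100%); }
  to   { transform: translateY(0); }
}
```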

While the converted code isn’t exactly what a developer would write for a production implementation, it’s enough to present to users for feedback.

So many ideas and so little time

What does the future hold? We identified four key areas to focus on.

Further support for After Effects capabilities such as pre-comps, shapes, and text.

The data is available in the Bodymovin JSON export but needs to be parsed and converted appropriately.

Modularize the parser and converter.

By doing this we could eventually write JSON export plugins for other prototyping tools such as Flinto, Principle, and more.

Provide more conversion options for developers.

Extending the UI to let developers choose between CSS or SVG animations based on their preferred implementation method, and further refining the relative-positioning and object-nesting algorithm.

Advanced management and permissions for rapid collaboration.

Currently, Cue supports individual user accounts through Google authentication and allows users to start projects and manage individual motion files within them.

An early concept for sharing animations was developed but still needs some thinking.

Our takeaway

We identified a problem, conducted research, and bootstrapped ways to potentially solve it.

Without any requirements from stakeholders, and only pressure from ourselves, the experience proved to be challenging and fun.

In the end, we came out with a demo that started to address that user need.

You can experiment with Project Cue here or check it out on GitHub.

We’d love for you to give it a try.

Let us know your successes and where it could be improved! If you’re so inclined, we would also encourage you to build it further.

We’d love to see this become a community-based project.

Peter Vachon and Cameron Calder are product team Design Leads at IBM Studios in Austin.

The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
