
Innovation is About the Journey: The Mulesoft Design Services Story

Federico Ariel Piagentini
Aug 26 - 6 min read

Meet Mulesoft Design Services

At Mulesoft, a critical part of integrating everything is building the Mule applications that power those integrations. Mulesoft offers customers many different ways to do this, including Composer, Flow Designer, and Studio, among others. All of them rely on a Mule Runtime behind the scenes to help the customer quickly build their integrations through Design Operations. Design Operations are any actions that make the user’s life easier: we test their connections to see if they are still valid; we autocomplete anything we can; we inspect all the attributes a third-party system accepts, along with their types; and we enable a smooth UI experience by validating each field. Does the user want to see what kind of data will flow through a step of the integration? No problem! Design Operations have their back. And once users are done drafting their integration, they can quickly test it and see in detail what happened at each step, just like with a debugger for any high-level language.
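To make the idea concrete, here is a minimal sketch of what a Design Operation might look like as an interface. All names here (`DesignOperation`, `TestConnectivity`, `DescribeFields`) are hypothetical illustrations, not Mulesoft's actual API:

```java
import java.util.Map;

// Hypothetical shape of a Design Operation: a small, self-contained action
// executed on a Mule Runtime on the user's behalf.
interface DesignOperation<T> {
    T execute(Map<String, String> config);
}

// Test whether a saved connection is still valid.
class TestConnectivity implements DesignOperation<Boolean> {
    public Boolean execute(Map<String, String> config) {
        // In the real service a Mule Runtime would actually open the
        // connection against the third-party system; this stand-in only
        // checks that the required configuration is present.
        return config.containsKey("url") && config.containsKey("credentials");
    }
}

// Inspect the fields (and their types) a third-party endpoint accepts.
class DescribeFields implements DesignOperation<Map<String, String>> {
    public Map<String, String> execute(Map<String, String> config) {
        // Hard-coded sample metadata; the real operation would query the
        // third-party system's schema.
        return Map.of("id", "string", "amount", "number");
    }
}
```

The UI can then treat every operation uniformly: run it against a runtime, render the result, and let the user keep clicking.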

So now that we understand Design Operations, we’re going to take a closer look at Mulesoft Composer, which extends the integration-building powers of Mulesoft’s Anypoint Platform to non-developers. It provides a no-code, point-and-click experience, coupled with enterprise-grade reliability, security, and governance. Behind the scenes, it’s powered by Mulesoft Design Services (MDS), the first serverless architecture built within Mulesoft, which helps customers build their integrations using the full stack of Mule Runtime tools.

MDS is what delivers all of the Design Operations spelled out above for users of Composer. Everything happens in near real-time, so customers don’t need to know anything about Mule Runtimes or infrastructure while they’re building their integration. To make this work, MDS uses a Kubernetes farm of pre-warmed Mule Runtimes to handle multi-tenancy and security isolation with high concurrency and autoscaling. Read on to find out how we built it!
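The pre-warmed pool idea can be sketched in a few lines. This is an illustrative simplification, not MDS's actual implementation: warm runtimes wait in a queue, an operation borrows one, and a replacement is warmed up to keep the pool at its target size.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of a pre-warmed runtime pool (names are hypothetical).
// In MDS, each entry would be a warmed-up Mule Runtime pod in Kubernetes;
// here a String identifier stands in for the runtime handle.
class WarmPool {
    private final BlockingQueue<String> warmRuntimes = new LinkedBlockingQueue<>();
    private int nextId = 0;

    WarmPool(int size) {
        for (int i = 0; i < size; i++) warmRuntimes.add(warmUp());
    }

    private String warmUp() {
        // The real service would start a pod and pre-load dependencies here.
        return "runtime-" + nextId++;
    }

    // Borrow a warm runtime and immediately queue a replacement, so the
    // caller never pays the runtime's slow startup cost.
    String acquire() {
        try {
            String runtime = warmRuntimes.take();
            warmRuntimes.add(warmUp()); // replacement (synchronous, for simplicity)
            return runtime;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
```

Because acquiring is just a queue take, the slow part (starting a Mule Runtime) is hidden off the request path, which is what makes near real-time Design Operations possible.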

A Hack is Born

We’re not Super Architects. No one woke up one morning with a fully-formed vision for how to architect MDS, how to provide a serverless API, or how to achieve fast response times. Innovation here needed baby steps, iteration and lots of research.

It all started during a company-wide hackathon a few years ago, fueled by energy drinks and lots of snacks. The problem we wanted to solve was, “Mule Runtimes are slow to start and may fail during an integration, requiring manual restart (even in our cloud product Flow Designer). How can we improve the customer experience so they don’t need to care about any of that?”

During the hackathon, we succeeded in having a Mule Runtime execute a single Design Operation and then destroy itself. This was the first step toward an ephemeral Mule. The idea was simple: let’s break the coupling between a Mule application being designed and a Mule Runtime. Soon after, we started a Proof of Concept (POC) to find the best way to achieve this in production.

Proof of Concept Research

The POC began with research into all the serverless tools out there. We didn’t want to reinvent the wheel if we didn’t need to.

We first looked into using AWS Lambda to start a Mule Runtime, execute an operation as a Lambda function, and then delete the Runtime. However, we found this took too long. Additionally, we discovered scenarios that were incompatible, because we needed some “state” shared between certain operations (like clicking “Test” and keeping a Mule app deployed for a certain amount of time).

We also looked into AWS API Gateway, Google App Engine, Microsoft Azure Serverless, IBM’s Apache OpenWhisk, Kubeless, and some other technologies that were aiming to solve the serverless paradigm through different strategies. With each POC, we would learn something and we would advance a bit more, but eventually we would hit a roadblock. It seemed our solution was not out there, and we would need to build something ourselves — but maybe not all the way from scratch.

The Solution Was In Our Own House

Along came Mulesoft Runtime Fabric (RTF), which had been freshly released at about that point in our innovation journey. RTF offers a way to deploy Mule applications into a Kubernetes cluster. It includes the know-how to package Mule Runtimes as Docker images, as well as to connect them to the outside world using RTF’s integrated ingress solution. It also includes configuration, secret management, and monitoring. RTF offered us enough of a head start to build on, and the POC became super-focused on making MDS work on top of an RTF cluster.

On a Mission to Dreamforce

We started with a single service deployed inside the RTF Kubernetes cluster, and a single Design Operation to be covered: Test Connectivity. A Mule Runtime would already be deployed as a pod, and we would simply use it to solve the operation. We executed a request from Postman and the first scenario was a roaring success! That’s it? Did we make it? No, we were FAR from finished, but this was a good start. Now that we had the simplest scenario covered, we built a roadmap and started coding in earnest.

What were our next steps? We needed to make sure we covered all possible Design Operations. Also, if we ever wanted to go to production, we needed to have high availability (HA) everywhere, which means we needed MDS to be able to scale. All of this led us to decide that we needed to start storing some distributed state — but where?

To avoid any incompatibilities between databases and our Kubernetes clusters, we wanted something living within the MDS cluster itself. The Mule Runtime had been working for ages with Hazelcast, a clustered, in-memory NoSQL data store that works well in contexts like Kubernetes and was already serving the Mules in RTF. Without hesitation, we put our POC hats on and gave it a try. It worked! With simple Java Collections, we had scalable distributed state. We started building everything on top of Hazelcast.
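The appeal of the "simple Java Collections" point is that Hazelcast's distributed `IMap` implements `java.util.Map`, so service code can treat shared cluster state as an ordinary map. The sketch below is an assumption-laden illustration: a `ConcurrentHashMap` stands in for what would be `hazelcastInstance.getMap("deployed-apps")` in a real cluster, and the class and key names are invented for the example.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Tracks which runtime each drafted app is deployed on. In production this
// map would be a Hazelcast IMap shared across all MDS replicas; here a
// local ConcurrentHashMap stands in so the sketch runs without a cluster.
class AppStateStore {
    private final ConcurrentMap<String, String> deployedApps;

    AppStateStore(ConcurrentMap<String, String> backingMap) {
        this.deployedApps = backingMap;
    }

    void markDeployed(String appId, String runtimeId) {
        deployedApps.put(appId, runtimeId);
    }

    String runtimeFor(String appId) {
        return deployedApps.get(appId);
    }
}
```

Because the code only depends on the `Map` contract, swapping the local map for a distributed one changes the wiring, not the logic — which is why any MDS replica can answer for any customer.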

So now that we had all of our tools in place, the mission became clear: make it to a staging environment in time to debut at Dreamforce, Salesforce’s annual customer conference.

Testing and Iterating

With hard work and a lot of focus, by Dreamforce, we had everything we needed: feature parity, a scalable product, everything deployed in a staging environment and…uh oh — NO SECURITY at all. Innovation is about the journey and this journey was still far from over. Luckily, the Dreamforce demo of the staging environment was a success, so we had the confidence and support from everyone to keep going on the MDS journey and make it to production.

Along came the security and architecture reviews, which helped us build great features like Network Policies and a strategy to isolate customers and avoid multi-tenant Runtimes. After that, we prepared for production and realized we needed better performance. This led us to build a whole new set of features for warming up runtimes: pre-loading dependencies, exercising classloading dynamically to make it future-proof, and many others.

Performance testing revealed that our communication with Hazelcast was not concurrency-friendly, so we had to rewrite the interaction from scratch, and then go a bit further still to make it performant.
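To illustrate the kind of rewrite involved (this example is ours, not the actual MDS change): a read-then-write sequence on a shared map is racy under concurrency, while a single atomic operation is not. The same pattern applies to Hazelcast, whose `IMap` supports atomic per-key updates (for example via entry processors).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Contrasts a check-then-act update with an atomic one on a shared map.
class Counters {
    private final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    // Racy: two threads can read the same old value and lose an update.
    void incrementRacy(String key) {
        Integer old = counts.getOrDefault(key, 0);
        counts.put(key, old + 1);
    }

    // Concurrency-friendly: merge performs the read-modify-write as one
    // atomic operation on the map entry.
    void incrementAtomic(String key) {
        counts.merge(key, 1, Integer::sum);
    }

    int get(String key) {
        return counts.getOrDefault(key, 0);
    }
}
```

In a distributed setting the racy version is even worse, since the read and write travel over the network separately; collapsing them into one atomic call both fixes correctness and halves the round trips.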

We couldn’t have predicted any of these things without walking the path of innovation, testing and iterating frequently.

Where We Are Now

The journey took us to the point of becoming the backend of Flow Designer, and onward to adding new features to work behind Composer, which takes us back to the introduction of this post. As you can see, innovation wasn’t a straight line, nor was there a singular “eureka!” moment. Rather, innovation proved to be the result of doing the work, documenting results, and iterating until we arrived at a version of MDS that we’re proud of. The goal from the very beginning was to help our customers in a significant way, and we’ve achieved that with MDS, which is now generally available.

In the end, innovation is about the journey, and this journey is still not over. Our options for the future are wide open as we work on new features for performance, high availability, and scalability to provide a continuously better experience to a growing customer base. MDS could become a full-fledged Mule Serverless as a new standalone offering, it could be used by integration tools in and out of Mulesoft, or it could be integrated with machine learning technologies to reach unprecedented response times. Anything is on the table!

If you’re interested in innovating with us, join our Talent Portal to check out open roles and get periodic updates from our recruiting team!
