CTO Bet: Future of Splice Desktop Apps

The primary job of a CTO is to leverage technology to empower the business and the team. Sometimes that means making boring but safe choices, sometimes that means more risky or controversial choices. I am starting a new series to discuss some of the bets we are making at Splice and the thinking behind those choices.

Realizing that some of our company’s early assumptions around our user base might have been a bit off, we adopted a slightly unconventional technical approach to our desktop applications.

Splice is the advocate and companion of the modern musician. We built a creation hub that connects the musician's workstation to the cloud, meaning we sit right next to the musician as she goes through her creative process. Early on, we opted for a transparent, discreet, independent integration: an app running in the background, handling various events and presenting a UI only when needed (similar to Dropbox). More complex experiences were designed to happen on the web, requiring the user to have a browser open.

Long story short, Splice is amazingly successful, but modern musicians feel that the browser is often too far away from their creation space and that it's a source of distraction. We have native apps written in Objective-C and C# (Mac and Windows). Keeping them in sync, implementing rich UIs, and QA testing have all been challenging. The quality of the user interactions and the limited feature set have prevented our users from fully benefiting from what we have to offer. After some research, an investigation period, and lots of discussions, we opted to go with a unified stack:

This stack is the same on Windows and Mac. We currently still have some legacy native code but we are quickly phasing it out. More on that below.

TL;DR summary

We are moving away from native code and switching to Go + local web-based views. Here is why:

  1. Simplify: Maintaining & keeping in sync 2 desktop code bases is seriously complicated, painful & expensive.
  2. Empower: Unifying the technical stack allows the entire team to contribute & innovate.
  3. Enable: the new stack allows us to build new features faster and more easily in a cross-platform fashion, using cutting-edge technology while still having access to native APIs.
  4. Acceptable overhead: while there is certainly a RAM overhead, it can and should be minimized.

This decision is slightly controversial because of two things: the choice of Electron, and the fact that we don't stay in the JavaScript world and instead bring in Go.

Electron is controversial

Electron often attracts negative comments on Hacker News and is associated with very large RAM usage, the Slack app being the usually cited example. The issue here is more complex than it looks. Electron is a wrapper around Chromium, which is the heart of the Chrome browser. In other words, the application renders a web view as its main view. Web views aren't by default as optimized as native GUI code, and that often means a greater RAM cost than purely native UIs. Besides that, Chromium itself loads its fair share of code into memory. The raw cost of an Electron window is around 40MB vs ~4MB for a native UI. Not great… but how much RAM do you have on your computer? Do you realize that Chrome allocates around the same amount of RAM when you open a new, empty tab? That's not an excuse for the memory overhead, but it helped me put some context around the raw cost.

The Slack app problem

People complaining about Electron's RAM usage usually refer to apps like Slack using 1GB or more. This is often due to a couple of issues: the architecture of the app and the way the frontend is coded.

During the investigation phase, I realized that it was best to avoid using Node.js modules, especially native modules. Those were hard to work with (odd behaviors at times, hard to debug, hard to test) and often memory heavy. The second part is the UI cost: load a web app in your browser and watch the memory usage go through the roof as you load pictures, GIFs, sounds, and lots of data. Some apps are better than others, and VS Code is a good example of an Electron-based app with a very reasonable memory footprint. That said, it is very true that most frontend developers don't benchmark or track the memory usage of their apps.

So yes, an app that has a browser-based UI will require a bit more RAM, but it should be reasonable if we expect people to have decently modern machines. Also, this quote from my colleague, after I complained about how much RAM Chromium uses, made me smile:

RAM is meant to be used, stop being cheap trying to keep all of it free - Loren to Matt

He has a good point, but of course you don't want to swap either, and you don't want apps you aren't actively using to hog most of your RAM.

Web frontend: empower the team, share knowledge

(image: Chromium)

So on the cons side we saw that there is a bit of a RAM usage increase, but what about the advantages of using a web-based GUI?

The main advantage is obviously developing the user interface once and reusing it everywhere. But it goes further than that. This decision also empowers a bigger part of the team: all our frontend developers can become desktop application developers. They certainly need to understand the differences between web and local (both technical and behavioral), but they don't need to learn Cocoa or WPF. Also, modern web APIs such as Web MIDI and Web Bluetooth are available via a cross-platform API in Chromium if we decide to go that way. Adding new UI features is now much faster than it used to be (even for our talented engineers).

Web framework


Because we already use Angular on the web, we decided to also use Angular for the GUI. Our team is familiar with the idioms, patterns, and challenges, and in theory elements can be shared. But what's even more interesting to me is that a frontend developer who's used to working on the web should be able to jump in on the desktop and feel at home. This consistency is very nice, allowing us more flexibility and knowledge sharing. As a side note, we aren't serving the application from the web; resources are available locally, but we will probably start using service workers in the future.

Separation of concerns

Another interesting aspect of our approach is that our logic doesn't live in Electron. It is common for applications to implement their logic so it runs in the Node.js process; we opted not to do that and instead run a Go + native code process. The frontend and the “core” service communicate via gRPC. This design decision ensures that the separation of concerns is really enforced. In other words, we are implementing the same architecture we know very well and use everywhere else: client/server. The difference is that the GUI doesn't talk to the web APIs but instead to a local server (which proxies some calls when needed and handles online/offline states). While that might sound like an over-engineered decision, we are planning on having other applications consume our services, and having a local API is a huge win. We consider the app GUI one of the potential clients in a non-exclusive relationship. We can imagine that tomorrow, other applications, plugins, or Digital Audio Workstations would want to communicate with our service through a standard, documented, and cross-platform interface. This separation of concerns and design consideration allows for that.
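To make the client/server idea more concrete, here is a minimal sketch of what the local core could look like in Go. To be clear: the `pb` package, the `Core` service, and the field names are hypothetical placeholders, not our actual API.

```go
// Hypothetical sketch of the local "core" service the GUI talks to over gRPC.
// The pb package (generated from a .proto that isn't shown), the Core service
// and its messages are illustrative, not Splice's actual API.
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/splice/core/pb" // hypothetical generated gRPC code
)

type coreServer struct {
	online bool // maintained elsewhere from connectivity checks
}

// ListSamples answers from the local library when offline and proxies the
// call to the web API when online (proxying details elided).
func (s *coreServer) ListSamples(ctx context.Context, req *pb.ListSamplesRequest) (*pb.ListSamplesResponse, error) {
	if !s.online {
		return &pb.ListSamplesResponse{Source: "local-cache"}, nil
	}
	return &pb.ListSamplesResponse{Source: "web-api"}, nil
}

func main() {
	// Only listen on localhost: the Electron GUI, and potentially other local
	// clients (plugins, DAW integrations, a CLI), connect to this port.
	lis, err := net.Listen("tcp", "127.0.0.1:9090")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterCoreServer(srv, &coreServer{})
	log.Fatal(srv.Serve(lis))
}
```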

Unified code base using Go


We invested many years in the native applications, and some of the details were implemented as we discovered edge cases around advanced features. Porting that code to Go isn't trivial, and we didn't want shipping to block on the port. That's why we still have a hybrid “core” architecture with native + Go code:

(diagram: hybrid core architecture with native + Go code)

Connecting Go to the original native code allows us to rewrite components in pieces, and gRPC allows us to not break the contract between the UI/frontend and the core/backend. We are going to rewrite piece by piece until there is no more native code left. It's also fairly straightforward to have Go communicate with native code, so we can still reach out to OS-level features if really needed.
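For illustration, here is roughly what that Go-to-native bridge can look like using cgo. The `legacy_scan_plugins` routine below is a made-up stand-in for an existing native component; the point is only that legacy code can sit behind a plain Go function until it gets rewritten.

```go
// Illustrative only: calling legacy native code from the Go core via cgo.
// The C function is a placeholder for a real native component, which would
// normally live in an existing library rather than inline in the comment.
package main

/*
#include <stdlib.h>

// Stand-in for a legacy native routine (e.g. something that inspects the
// user's plugin folder). Purely illustrative.
static int legacy_scan_plugins(const char* dir) {
    return dir == 0 ? -1 : 0;
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// scanPlugins wraps the legacy routine behind a plain Go function, so callers
// never see cgo and the implementation can later be replaced with pure Go.
func scanPlugins(dir string) error {
	cdir := C.CString(dir)
	defer C.free(unsafe.Pointer(cdir))
	if rc := C.legacy_scan_plugins(cdir); rc != 0 {
		return fmt.Errorf("legacy scan failed with code %d", int(rc))
	}
	return nil
}

func main() {
	if err := scanPlugins("/Library/Audio/Plug-Ins"); err != nil {
		fmt.Println("scan error:", err)
	}
}
```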

To us, gRPC is the most effective way to send data across boundaries; it is comparable to WebSockets but with better tooling. gRPC has other advantages that I might cover in a future post. The short version is that while it is a bit cumbersome at times, it's reliable and ensures a well-thought-out API.
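As a hypothetical example of what that looks like from the other side of the boundary, here is how another local process (a CLI, a plugin helper) could consume the same API. It reuses the illustrative `pb` package from the server sketch above and is not our actual client code.

```go
// Hypothetical sketch: a second local process consuming the core's gRPC API.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/splice/core/pb" // hypothetical generated gRPC code
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Dial the local core, the same endpoint the Electron GUI talks to.
	conn, err := grpc.DialContext(ctx, "127.0.0.1:9090", grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pb.NewCoreClient(conn)
	resp, err := client.ListSamples(ctx, &pb.ListSamplesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("samples served from: %s", resp.Source)
}
```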

The heavy lifting remains in Go and was ported from native code. Go's simplicity, native concurrency, and testability gave us more confidence, reduced the number of bugs, and empowered the entire backend team to join the fun. Go's rich ecosystem also gives us access to a lot of interesting libraries for future features. Obj-C, Swift, and C# are great languages, but they aren't among our main languages. I did consider using C#/Xamarin to create a single cross-platform app, but we would have needed a specialized and isolated team, and we wouldn't have been able to leverage as much of our knowledge and team expertise.
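To show the kind of thing I mean by simplicity and testability, here is a tiny, made-up example of the pattern we lean on: a pure function that is trivial to unit test, fanned out with goroutines. None of the names here come from our code base.

```go
// Illustrative only: small, testable concurrency in Go. Names are made up.
package main

import (
	"fmt"
	"sync"
)

// fetchSample stands in for downloading one sample; keeping it a plain
// function with explicit inputs and outputs makes it trivial to unit test.
func fetchSample(id string) (string, error) {
	return "downloaded:" + id, nil
}

// fetchAll downloads samples concurrently and collects the results in order.
func fetchAll(ids []string) []string {
	var wg sync.WaitGroup
	results := make([]string, len(ids))
	for i, id := range ids {
		wg.Add(1)
		go func(i int, id string) {
			defer wg.Done()
			if r, err := fetchSample(id); err == nil {
				results[i] = r
			}
		}(i, id)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fetchAll([]string{"kick-01", "snare-02", "hat-03"}))
}
```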

Conclusion

My bet is that bringing the well-known cloud architecture to the desktop is the right move for our company. There is a slight memory overhead, but I think that by paying extra attention to it, and with the help of other community members, we are going to keep it to a minimum. After evaluating the cost, I prefer to pay that short-term penalty and have a solid architecture to build upon. I also really think that empowering the team and unifying the tech stack is going to make a big difference. We already saw that when we were able to “surge” UI development. Finally, the bet is on more than a simple port; it's a redefinition of what a desktop app can become. Having a powerful cross-platform service written in a modern language opens the door to a lot of opportunities.

Shameless plug: If that’s a challenge you’re interested in working on, our team would love to chat with you.



2017-06-06
