Shared Code at Scale: How to Accelerate Your Team
For high-velocity development teams, shared code is considered a best practice.
What Is Shared Code?
Shared code is code that has already been written once and can be reused in other projects by other teams. Teams don’t have to spend time on problems they have already solved. Using shared code can also create consistency across teams and projects because you have a unified way to build projects.
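As a minimal illustration, a shared utility might be a small module that several projects import instead of re-implementing the same logic. The module and function names here are hypothetical:

```python
# shared/validators.py -- a utility module that any project can import
# instead of re-implementing the same check.

def is_valid_email(address: str) -> bool:
    """Deliberately tiny illustrative check; a real project
    would use a proper validation library."""
    return "@" in address and "." in address.split("@")[-1]

# Project A and Project B both reuse the same function
# rather than each writing their own version.
print(is_valid_email("dev@example.com"))   # valid address
print(is_valid_email("not-an-email"))      # invalid address
```

Once two or more projects depend on a module like this, a change to it affects all of them, which is exactly why sharing code at scale needs more structure than copy and paste.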
For big projects and large teams, sharing code is more than copying and pasting a few lines. So how can teams share code at scale?
How Teams Share Code
Teams use different methods to share code. On a smaller scale, a team may:
- Reuse code within a single project.
- Share code across several projects.
Because all of the shared code is still within a single team, it is easier to manage. These teams usually know the code inside and out and frequently communicate. Documentation about what code is shared can be in a simple README file. Larger teams can use external collaboration tools.
As teams continue to scale, shared code becomes more complicated. In an effort to accelerate development and build bigger projects, more teams need access to shared code. Companies may have a:
- Small number of teams sharing code across a few projects.
- Complex infrastructure of shared code modules that are shared within a team or project or across several teams and projects.
Challenges Working With Shared Code at Scale
At scale, shared code involves packaging files into components and modules and making them available for other projects and teams. Although this has a lot of benefits, it does present challenges. These challenges are amplified at scale.
Organizing a shared code model involves managing dependencies between separate collections of code. If you change something in a component, library, package, or module that is used in several products, you need to understand the potential impact. How will this change work with all of the related products? Your architecture helps determine how these dependencies are managed. This can be done in a variety of ways.
Shared libraries implement modules as versioned packages. Teams can then pick and choose which modules go into their products. At the beginning, this is easy to maintain. But technical dependencies grow when products start using different versions of those libraries.
When developers, CI/CD systems, or build processes update a library’s version, you need to test every product, asset, and deliverable that depends on it. And because most version control systems don’t store packages, you are relying on separate tools and processes. These can help manage your project and library dependencies, but they can also complicate your CI/CD pipeline.
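The version-drift problem above can be sketched in a few lines of Python. This is an illustrative check, not a real packaging tool; the product names, library names, and pinned versions are all hypothetical:

```python
# Sketch: detect when products pin different versions of a shared library,
# which is where dependency drift begins.
from collections import defaultdict

# Each product's pinned dependencies (illustrative data, not a real manifest).
products = {
    "storefront": {"shared-auth": "2.1.0", "shared-logging": "1.4.2"},
    "billing":    {"shared-auth": "2.3.1", "shared-logging": "1.4.2"},
    "reports":    {"shared-auth": "1.9.0"},
}

def find_version_drift(products):
    """Return {library: {version: [products]}} for every library
    pinned at more than one version across products."""
    pins = defaultdict(lambda: defaultdict(list))
    for product, deps in products.items():
        for lib, version in deps.items():
            pins[lib][version].append(product)
    return {lib: dict(vers) for lib, vers in pins.items() if len(vers) > 1}

drift = find_version_drift(products)
# "shared-auth" is pinned at three different versions across products;
# "shared-logging" agrees everywhere, so it is not reported.
```

A report like `drift` is the kind of visibility a CI/CD pipeline needs before a library update fans out to every dependent product.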
Google and Facebook use a monorepo architecture. This helps them share code by having it all in one place, reducing external dependencies. Developers can simply access the information they need in the same repo as their project.
Using a monorepo can give teams visibility into projects, allowing them to collaborate more easily. But you need to make sure that your tools can support it. Because you are working with such a large codebase in one repo, you need a stable CI/CD pipeline that ensures unit, smoke, and integration tests pass consistently. You also need high-performance tools that can support large downloads and merges, so developers aren’t delayed.
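One common technique for keeping a monorepo’s CI/CD pipeline stable is selective testing: map changed files to the projects (and their dependents) that actually need rebuilding, instead of retesting everything. A minimal Python sketch, assuming a hypothetical dependency graph:

```python
# Sketch: given files changed in a commit, find every project in the
# monorepo that must be rebuilt and retested.

# Illustrative graph: project -> set of projects it depends on.
DEPENDS_ON = {
    "web":        {"ui-kit", "api-client"},
    "mobile":     {"ui-kit"},
    "ui-kit":     set(),
    "api-client": set(),
}

def affected_projects(changed_paths):
    """Return projects whose own files changed, plus every project
    that depends (transitively) on a changed project."""
    # Assume top-level directory = project name (a simplification).
    affected = {path.split("/")[0] for path in changed_paths}
    grew = True
    while grew:  # propagate through the dependency graph to a fixed point
        grew = False
        for project, deps in DEPENDS_ON.items():
            if project not in affected and deps & affected:
                affected.add(project)
                grew = True
    return affected

targets = affected_projects(["ui-kit/button.tsx"])
# A change to ui-kit pulls in web and mobile, but not api-client.
```

Real monorepo build systems compute this from declared build targets rather than directory names, but the principle is the same: only test what a change can actually reach.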
Microservices architectures use APIs to communicate between loosely coupled services. Each one is individually maintained. Often these services are kept in their own separate repos. This helps teams focus and maintain only one specific part of the codebase. As a result, developers have a deep familiarity with the service(s) they're creating or maintaining.
Breaking up code can help with performance because developers do not need to copy large repos. But developers have to track where their changes flow and apply them across separate repos. For CI/CD, this can create massive cross-repo dependencies over time.
Using microservices takes a lot of resources — testing frameworks, DevOps engineers, additional software, etc. — to manage. Services run on different versions and need to be tested together. Developers don’t host the external API locally, so they rely on their network and infrastructure for the API to be online and accessible. Admins need to make sure that everything is up to date and working correctly so people are accessing the right shared version. Architectures, schemas, and system integrations need to be meticulously documented and communicated to avoid API incompatibilities.
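The API incompatibilities described above can often be caught early with a simple contract check: compare the fields a consumer expects against the fields a provider actually publishes. This Python sketch uses hypothetical service names and fields:

```python
# Sketch: a consumer-driven contract check between two services.

provider_contract = {   # fields the "orders" service publishes (illustrative)
    "order_id": "str",
    "total":    "float",
    "currency": "str",
}

consumer_expects = {    # fields the "billing" service reads (illustrative)
    "order_id": "str",
    "total":    "float",
    "tax":      "float",
}

def missing_fields(provider, consumer):
    """Fields the consumer needs that the provider does not offer --
    each one is a breaking incompatibility waiting to happen."""
    return {field for field in consumer if field not in provider}

broken = missing_fields(provider_contract, consumer_expects)
# "tax" is expected by billing but never published by orders.
```

Running a check like this in CI, against versioned contracts instead of hand-maintained documents, turns “meticulously documented and communicated” into something the pipeline enforces.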
All of these architectures create some kind of dependency. You need to make sure your tools give you visibility into what is happening in the code. Otherwise you are relying on external, manually maintained documents, or worse, a physical whiteboard that may already be out of date.
Stability with Shared Code
When you start adding new team members, you increase the number of daily commits. These commits are usually no longer to a single shared branch. As teams grow, your branching strategy and code review strategy need to evolve.
As more changes enter a codebase, you increase the risk of immature code slipping in. Using a shared code model requires teams to rigorously review and test code. If you have your codebase split among repos, you have dependencies that need to be tested together. In either case, you need to make sure to balance speed with quality to ensure code remains stable.
Keep your merges small. Making them very specific helps teams track down issues should they arise in the future. And of course, merge early, merge often.
Supporting Multiple Releases
Shared code can also create headaches when it comes to supporting releases. Your teams need to work on code that is shared across teams and projects, and it can be almost impossible to track down a bug rooted in an older version. Make sure your developers are merging the right way, and that your tools run regular tests.
How to Use Shared Code at Scale
How can your teams conquer these challenges and scale?
Complex architectures and accessible libraries are critical to using shared code effectively. They require tools that support sharing a large volume of assets, enhance visibility, and support your release process.
To see how projects, modules, and components are related, companies often look to documentation and admins. But these rarely help solve merge conflicts. Developers need to see where their code fits alongside the work of their team members. Although many tools are involved, version control is the foundation.
How High-Velocity Teams Leverage Shared Code
Helix Core — version control from Perforce — helps teams share code. With Helix Core, developers always know who is working on what. Using the Helix Visual Client, developers can exclusively check out code, see who has code checked out, and view all their digital assets (including Git repos) on one screen.
How Does Shared Code Work With Perforce?
Complex and large-scale projects are easier to manage because teams always know how changes should flow between projects. All your digital assets — including packaged files if desired — are stored in one high-performance server. This gives you a unified build and a single source of truth for your CI/CD pipeline. Teams can code and bring everything (including Git assets) together. Plus, Helix Core has the performance to deliver large files to team members fast.
Features like Perforce Streams also help support your architecture. Using the Stream Graph, you can see what branches, releases, and projects are waiting for changes. Stream Depth can help you customize further by keeping your components separate. Then they can be added into your pipeline and easily shared across projects and teams.
Get Started With Helix Core
See for yourself how Helix Core will help you share code faster. You can get started for free for up to 5 users and 20 workspaces.