First Steps Towards Realising Integration as a Service

Sidharth Kaushal | 2024.08.30

This conference report is based on the discussions at a one-day workshop held in September 2023 to identify early opportunities to set the conditions for integration across the joint force.

Introduction

This conference report is based on the discussions which occurred during a one-day workshop held in September 2023 at RUSI’s headquarters in London, attended by a range of representatives from defence companies and the UK Ministry of Defence (MoD). The purpose of the workshop was to identify early opportunities to set the conditions for integration across the joint force. The workshop examined where and how the MoD, the armed services and, especially, UK Strategic Command (StratCom) and its new Integration Design Authority (IDA) can achieve immediate results to galvanise the broader effort to deliver the stated aspiration of providing “integration as a service”.

The workshop focused on how different actors, past and present, have approached the challenge of data integration, and sought to identify transferable lessons from precedents both within defence and beyond it, in areas such as the financial sector. This report, based on the workshop and a subsequent review of secondary literature, examines the immediate steps that the MoD can take to set the conditions for a broader effort to achieve multi-domain integration.

A key deduction from the day’s discussions was that if, as seems likely, software-driven evolution plays an important role in determining how the joint force fights – perhaps to an even greater degree than hardware adaptation – then it will be of considerable importance that frontline commands (FLCs) are able to cohere capability at all stages, even as software is continually adapted.

It was noted that traces of this change are already visible in operational theatres such as Ukraine, where software can and must be adapted on a six-weekly basis to stay ahead of adversary cyber- and electronic-warfare capabilities; this rate far exceeds that of hardware replacement or adaptation.

If the joint force is to operate across domains, FLCs must pursue iterative change in a coordinated manner. Consequently, the specific focus of the workshop was on achieving common data standards within defence. The workshop and subsequent research relied on historical case studies from both the UK and other nations to identify the drivers of success and failure in integration efforts.

The day’s discussions also examined the development of civilian networks, which have succeeded through the existence of standards sufficiently broad to enable change but well defined enough to allow for interoperability. The lessons derived are applicable to other areas where integration and standardisation of capabilities across all services is a priority.

Key Findings

The key findings that surfaced from the day’s discussions, reinforced by a review of secondary literature, are as follows:

  • There is limited precedent for successful integration on a top-down basis. Moreover, success has often involved processes which – by virtue of their secrecy – sat outside normal procedures.

  • In several areas there are strong service- and defence-level incentives to collaborate, but no individual service owns responsibility for an outcome. This separates operational from financial risk. Helping resolve these “tragedy of the commons” areas would allow the IDA to work with, rather than against, the grain of service imperatives.

  • A common set of foundational standards which FLCs can use to drive the process of integration could serve as a starting point. Data standards are a key part of the foundations. Achieving standards in one area can create functional spill-overs (where integration in one area makes it difficult to function without integration in other areas, leading to a cascading process that eventually requires no top-down supervision).

The Challenge of Integration

As was discussed at the workshop, a number of factors can pose challenges to the effective integration of capabilities across the joint force. Among them are:

  • A tendency towards feature creep – the expansion of an integration effort to include more tasks than can reasonably be accomplished, thereby increasing complexity.

  • An over-emphasis on technical capabilities, as opposed to specifying the operational requirements that justify the costs and complexities inherent to integration.

  • Non-adoption or partial adoption of standards once they are created.

  • Standards that act as barriers to entry for new capabilities because of their complexity.

During the workshop, it was noted that these challenges have been prominent features of previous UK efforts at joint force integration, particularly the Network Enabled Capability (NEC) programme, which began in the 2000s. The challenges that the UK has faced with the adoption of the NEC are in certain respects emblematic of the issues one may expect to encounter. As pointed out by one of the participants, the NEC was a victim of the fact that FLCs were not involved in articulating operational use cases. Moreover, the programme was not closely aligned at the enterprise level. Ideally, it would have encompassed both the relationships between different acquisition lines and areas such as operator education. It was noted that because NEC-related programmes were not situated within an operational use case, the costs were harder to justify within individual services where the benefit was not as great as for others. As a result, when services had to make trade-offs between individual NEC capabilities (such as a cooperative engagement) and other lines of effort, NEC capabilities were often sacrificed, scaled back or only partially adopted.

Case studies discussed at the workshop, such as the US Air Force’s Advanced Battle Management System (ABMS) programme, illustrate another major risk that integration efforts face: feature creep. The programme, which began as a replacement for the E-8 JSTARS, morphed into a wider effort to deliver an internet of things. While this development was not problematic in itself, the fact that use cases were driven by programme executive offices led to uncoordinated feature creep and cost increases.

In those instances where effective top-down integration was achieved, notably the US Navy’s NIFC-CA programme, it was enabled by two things. First, a clear single service chain of command within a well-defined mission set allowed the US Navy to control requirements and enforce them. Second, as observed by a workshop participant who was involved in the rollout of the programme, the heavy classification of the programme meant that decision-making was restricted to a small number of individuals who had the authority to make and enforce systems engineering trade-offs (even though, in many cases, the reasons for specific requirements were not communicated to those charged with implementing them). Narrowing the group responsible for making decisions made it possible to avoid feature creep. Once established, requirements were imposed on engineers with little room for consultation, given the classifications involved, while the decision-makers’ authority could not be challenged. It was noted at the workshop that the ability to develop stringent requirements and ensure their adoption without pushback was only possible because many existing procedures for procuring and integrating capability were circumvented in a programme which was subject to highly centralised (and specific) processes.

Participants discussed the fact that these criteria will be difficult to achieve outside very specific contexts. Rather than integrating specific platforms, then, a different approach to integration might be taken – one that uses the authorities of bodies such as StratCom to solve emerging tragedies of the commons, over which no individual service has effective control or responsibility. Participants agreed that where StratCom can add value as the “strategic integrator” is not by solving specific integration challenges as an external part of the programme, but rather by creating toolkits that allow integration to be driven by MoD Finance and Military Capability (FMC) using existing management systems.

The following sections describe working use cases which might become a basis for focused support to cross-service integration by StratCom.

Opportunities to Enable Cross-Service Integration

Several solutions emerged from the day’s discussions. First, the IDA could provide a mechanism to cohere capability if it more systematically informed the Defence Capability Risk Assessment Register (DCAR) process. Although the IDA does not have budgetary authority, it can provide FMC with information about which capability gaps can be closed through integrating existing or likely capabilities and which require additional capacity rather than integration. Through the IDA and leveraging its responsibilities for Defence Digital, StratCom can generate an information base regarding data standards, which FMC can then use to insert requirements into specific programmes. This might be analogised to the way in which consultancy firms are used by governments to fill gaps in both expertise and capacity. With a staff drawn from across the services and the capacity (through the Permanent Joint Headquarters) to assess the operational use cases for individual service capabilities, StratCom can provide FMC with the information needed to inform decisions regarding data standards.

As was discussed at the workshop, the importance of a feedback loop between StratCom and FMC will become greater in the medium term, as the demands of integration will increasingly affect hardware. Most concepts for distributed operations, such as DARPA’s “STITCHES” Initiative, introduce considerable requirements for processing power to enable network integration and the translation of data at the tactical edge. This will in turn introduce size and cost requirements on platforms which StratCom and FMC can only drive in tandem. A model which is initially applied to data- and software-led development can then serve as a microcosm for a more ambitious system which will be needed in a 10-year timeframe. The model can enable StratCom and FMC to develop the procedures needed to coordinate with each other and the FLCs, and can introduce the services to new practices.

Creating a Market of Standards

During the workshop, it was noted that delivering the integration identified above could be facilitated immediately by creating a market of common standards which services can opt into for specific functions. In this way, StratCom could set the stage for future integration.

Notably, if software-led integration is a priority, flexible standards will be essential. This has been observed in civilian networks, which are able to maximise their effectiveness through a combination of relatively specific bearer standards and much more flexible standards for the transport and middleware layers of a system.

Participants also observed that there is a growing body of evidence to suggest that backwards integration with open source software is possible, with examples from the world of integrated air and missile defence particularly prominent. It is possible to define broad parameters within which messages must fit and to then rely on middleware to translate data across the different formats.
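
To illustrate the principle, the sketch below (in Python, with purely hypothetical message formats and field names) shows how a middleware layer might translate differently structured track reports into one common schema, provided each format fits within broadly defined parameters:

```python
# Minimal sketch of middleware-style translation: two hypothetical track-report
# formats are normalised into a common schema so that consumers do not need to
# know which system produced a message. All formats and field names are
# illustrative assumptions, not real military data standards.

COMMON_FIELDS = ("track_id", "lat", "lon", "source")

def from_format_a(msg):
    """Translate a hypothetical 'Format A' report into the common schema."""
    return {
        "track_id": msg["id"],
        "lat": msg["position"]["latitude"],
        "lon": msg["position"]["longitude"],
        "source": "format_a",
    }

def from_format_b(msg):
    """Translate a hypothetical 'Format B' report into the common schema."""
    lat, lon = msg["latlon"]
    return {"track_id": msg["track"], "lat": lat, "lon": lon, "source": "format_b"}

TRANSLATORS = {"A": from_format_a, "B": from_format_b}

def translate(fmt, msg):
    """Route a message to the appropriate translator and check the result."""
    common = TRANSLATORS[fmt](msg)
    assert all(key in common for key in COMMON_FIELDS)
    return common

# Both reports end up in the same shape despite their different origins.
print(translate("A", {"id": 7, "position": {"latitude": 51.5, "longitude": -0.1}}))
print(translate("B", {"track": 9, "latlon": (52.2, 0.1)}))
```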

Participants noted that not all tasks will require comparable levels of data standardisation. For tasks such as electronic warfare, for example, the requirements for rapid updating and the stringency of security requirements impose a need for well-defined standards, which limit the amount of time needed to translate data across formats. Similarly, a particularly stringent set of standards could be adopted by the FLCs to share F-35 data while they might select different standards for other functions. Conversely, in other cases, such as joint logistics, a more flexible set of standards might be applied.

One key determinant of the degree to which standards are necessarily stringent will likely be the degree to which network compromise can be expected, as pointed out during discussions regarding the ongoing war in Ukraine. Where network compromise is highly likely (for example, because platforms will operate within reach of adversary electronic warfare capabilities), it will probably be the case that data will need to be packaged in ways that enable encryption while balancing the trade-offs involved between encryption and latency. As was discussed at the workshop, this will be of particular importance when a decision is made to exploit civilian bandwidth for certain functions – the relative lack of security of the network layer makes the security of the messaging layer all the more important. In other instances, compromise may be likely but accepted as the cost of scalability – for example, when commercial off-the-shelf UAVs are being incorporated into force structures. Finally, there are some instances where the security of a network against different modes of compromise is sufficiently robust that risk can be accepted at the level of data. One example is communications using millimetric wave frequencies.
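
The toy mapping below simply restates the examples above as a lookup; the network categories, protection choices and latency costs are assumptions drawn from the text rather than an established classification:

```python
# Illustrative decision sketch, not doctrine: map an assumed network type to
# where protection is applied and to the latency cost accepted.

PROFILES = {
    # Civilian bandwidth: the network layer is weakly protected, so the
    # messaging (data) layer carries the security burden at a latency cost.
    "civilian_bandwidth": {"protect": "data layer", "latency_cost": "high"},
    # COTS UAV links: compromise is accepted as the price of scalability.
    "cots_uav_link": {"protect": "accept risk", "latency_cost": "low"},
    # Millimetric wave links: the network itself is hard to intercept, so risk
    # can be accepted at the level of the data.
    "millimetric_wave": {"protect": "network layer", "latency_cost": "low"},
}

def protection_profile(network):
    """Return the assumed protection choices for a given network type."""
    return PROFILES[network]

print(protection_profile("civilian_bandwidth"))
```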

Both discussions during the workshop and existing research suggest that the experience of organisations such as the International Organization for Standardization supports the argument that a diverse set of voluntary standards can be effective in allowing agents to select the standards best suited to enabling their operations. Rather than driving the process, StratCom and the IDA could generate multiple options relevant to specific multi-domain tasks, from which FLCs can jointly select based on cross-service consultation. By resolving the informational challenge of generating options, StratCom can narrow the set from which services choose, and by removing the requirement for services to generate standards themselves, it gives them an incentive to select from the options it provides.

A conclusion reached by participants was that where StratCom might add value is through the provision of a typology of trade-offs. Data standards could be categorised according to whether they are designed to allow access to multiple network types, reduce latency or increase security – mindful of the fact that one can typically achieve at most two of these three ideals. FLCs could then justify trading off one priority against others based on the demands of a specific operational requirement. For example, if forward reconnaissance elements of the force are expected to operate in communications-denied or -disrupted environments, they might opt to trade latency for the ability to securely use multiple modes of communication. Levels of encryption sufficient to pass data along multiple types of networks with different levels of vulnerability to compromise necessarily impose demands in terms of tactics, techniques and procedures (TTPs) and the size of data packages, meaning a cost in latency which precludes functions such as the frequent use of full video links.
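
As an illustration of how such a typology might be used, the sketch below tags a set of entirely hypothetical standards with the two ideals each favours and filters them against an operational requirement:

```python
# Hypothetical catalogue of standards, each tagged with the (at most) two of
# the three ideals it favours. The names, tags and example query are
# illustrative assumptions, not real standards.

IDEALS = {"multi-network access", "low latency", "security"}

CATALOGUE = {
    "std-alpha": {"security", "multi-network access"},      # e.g. forward recce
    "std-bravo": {"low latency", "security"},                # e.g. EW, F-35 data
    "std-charlie": {"low latency", "multi-network access"},  # e.g. joint logistics
}

def options_for(required):
    """Return the catalogued standards that cover the required ideals."""
    assert required <= IDEALS and len(required) <= 2, "at most two of three ideals"
    return [name for name, covers in CATALOGUE.items() if required <= covers]

# A forward reconnaissance force trading latency for secure multi-network use:
print(options_for({"security", "multi-network access"}))  # -> ['std-alpha']
```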

Case Study: Distributed Tactical Ledgers

The final part of the workshop was based on group discussions of potential use cases. One area where participants identified the potential to achieve “quick wins” was the development of a system comparable to those based around distributed ledgers that have emerged in the world of finance. The key points that emerged from discussions around this use case are outlined below.

Any system of coordinated collective action requires agents to have information about each other’s whereabouts, capabilities and obligations. For example, if an air defence interception is to be performed on a collaborative basis that might involve an Army-operated ground-based air defence (GBAD) system, an RAF combat aircraft and a ship, it will be necessary to know what capabilities are held on each platform, how well suited they are to the task of an interception and how valuable it would be for each platform to perform the interception as opposed to another function. In other words, ledgers of both capabilities and the value of using a given capability in a specific way would be needed.
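
A minimal sketch of what such ledgers might contain is given below; the platforms, field names and scores are illustrative assumptions rather than a proposed schema:

```python
# Toy ledgers for a collaborative interception: one indexes what each platform
# holds and how suited it is to the task, the other the value of tasking it
# versus leaving it on its current mission. All entries are hypothetical.

from dataclasses import dataclass

@dataclass
class CapabilityEntry:
    platform: str        # e.g. "GBAD battery", "combat aircraft", "ship"
    effectors: list      # what the platform could contribute to the interception
    suitability: float   # notional 0-1 score for this interception geometry

@dataclass
class ValueEntry:
    platform: str
    value_if_tasked: float        # notional value of performing the interception
    value_of_current_task: float  # opportunity cost of re-tasking the platform

def best_candidate(caps, vals, min_suitability=0.5):
    """Pick the suitable platform with the largest value margin."""
    values = {v.platform: v for v in vals}
    suitable = [c for c in caps if c.suitability >= min_suitability]
    return max(
        suitable,
        key=lambda c: values[c.platform].value_if_tasked
        - values[c.platform].value_of_current_task,
    ).platform

caps = [CapabilityEntry("GBAD", ["SAM"], 0.8), CapabilityEntry("aircraft", ["AAM"], 0.9)]
vals = [ValueEntry("GBAD", 0.7, 0.2), ValueEntry("aircraft", 0.9, 0.8)]
print(best_candidate(caps, vals))  # -> "GBAD": tasking it costs the least elsewhere
```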

As was discussed in the breakout sessions, awareness of this kind currently exists at the theatre level – for example, in the Recognised Air Picture (RAP) maintained at a Combined Air Operations Centre (CAOC). However, if envisioned concepts of distributed operations are to be achieved, this awareness and capacity for collaboration must be pushed to the edge. One prerequisite for this is a shared set of standards by which information about available cross-domain blue forces and tasking orders can be shared locally.

Participants in the breakout sessions agreed that the ability to task capabilities in a decentralised manner would require, at a minimum, both the ability of units to communicate their availability to nearby units from across the force, and a basis on which tasking requests might be either accepted or rejected. In the civilian world, applications such as Uber accomplish this through the broadcasting of data by taxis and the use of a price mechanism. However, this model is not directly transferable to a military context. The constant broadcasting of data represents a security risk, while decisions about what a platform is tasked to do typically reflect the allocation of resources to a task by a higher authority through, for example, an air tasking order. Nonetheless, many emerging concepts of operations, such as Joint All Domain Command and Control (JADC2) and Mosaic Warfare, presume a comparable dynamic recombination of assets, with systems sometimes being tasked to a zone rather than a specific function. In a military context, communications would likely need to be local rather than broadcast, because broadcasting availability creates risk; local communication would instead rely on directional transmissions which, in turn, create a requirement for blue force tracking that would be difficult to assure in GPS-denied environments. The requirement for accurate blue force tracking stems from the fact that to share data safely, as F-35s have done with US Marine Corps HIMARS batteries in tests, systems need to use Link 16 on directional antennae rather than relying on omni-directional broadcasts, which would reveal their positions. Directional transmissions require an accurate understanding of the location of a receiving antenna, which has been achieved in tests but which would be difficult to deliver in conflict. The need for blue force tracking incentivises an effort to leverage multiple networks, but this must be balanced against the requirement for security. In effect, there is something of a trade-off between flexibility at the network level and flexibility at the data level: the more flexible a system is at the network level, the more stringent its data standards must be.

Such a system would necessitate a shift from mission command to mission definition at the edge, as operators would exercise control over not only how they executed preset missions but also which tasks they opted to support. For example, an aircraft over a part of the battlefield for the purpose of suppression of enemy air defences (SEAD) may face a choice between engaging a surface-to-air missile (SAM) system or broadcasting data to enable another system, such as a Guided Multiple Launch Rocket System (GMLRS), to engage a different target. This requires the pilot to know how valuable the alternative target is to the ground forces’ mission set, relative to the SAM, which could otherwise be engaged as part of the SEAD mission. In effect, a military analogue to a pricing mechanism is necessary for multi-domain operations, but it must be secure.
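
The sketch below gives a sense of the value comparison such a mechanism implies; the tasks, values and the notion of “best alternative coverage” are purely hypothetical:

```python
# Toy version of the 'pricing mechanism' analogue: a platform at the edge
# compares the marginal value of candidate tasks, accounting for how well the
# best-placed alternative system could cover each one. Values are hypothetical.

def marginal_value(task_value, best_alternative):
    """Value added by taking the task oneself rather than leaving it to the
    best-placed alternative system (zero if it is already fully covered)."""
    return max(task_value - best_alternative, 0.0)

def choose_task(candidates):
    """candidates maps task -> (value of the task, value the best alternative
    system could deliver against it); pick the task with the largest margin."""
    return max(candidates, key=lambda task: marginal_value(*candidates[task]))

# SEAD aircraft example: the SAM could still be engaged later as part of the
# SEAD mission (high alternative coverage), whereas the GMLRS cue cannot.
print(choose_task({
    "engage SAM": (0.8, 0.6),        # margin 0.2
    "cue GMLRS strike": (0.7, 0.1),  # margin 0.6 -> chosen
}))
```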

The coordination of capabilities will require network standards that are sufficiently flexible to enable the use of multiple pathways, as well as data standards that are stringent enough to allow for tasking requests to be communicated securely, since multiple networks of varying reliability are being used in the face of adversary compromise. Also, a mechanism for assessing whether the application of a system to a task is appropriate given the system’s availability would be necessary.

While this is a complex, multi-level systems engineering task, the full extent of which should be the basis for subsequent discussions, a first step towards such a system might involve delivering:

  1. A distributed cross-domain system for tracking blue forces and delivering tasking orders without communications to higher echelons.

  2. A distributed system for assigning value to tasks.

This system of distributed ledgers would, at a minimum, require shared standards for indexing data about locations, capabilities and values, as well as voting protocols to enable changes to locations and target values without systemic risk. There are multiple network types which could support such a system. Area-wide communications networks, including civilian systems such as Starlink, might represent one mode. Alternatively, each platform in an area could communicate locally to update peers on its position, enabling them to pass this data on to platforms adjacent to them. This would allow data to be “daisy chained” between adjacent systems so that broad situational awareness can be achieved without the constant broadcasting of data. Local communication with adjacent nodes can be achieved using short-wavelength communications frequencies that are less susceptible to compromise.
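
As a purely illustrative sketch, the code below shows a toy version of such a ledger update, in which a change to a shared index is committed only once a majority of peer nodes accepts it; the nodes, indexing scheme and acceptance rule are assumptions, not a proposed protocol:

```python
# Toy distributed-ledger update under a simple majority-vote rule: a proposed
# change to a target's value is committed on every node only if a quorum of
# peers endorses it. Node names, keys and thresholds are hypothetical.

from dataclasses import dataclass, field

@dataclass
class LedgerNode:
    name: str
    ledger: dict = field(default_factory=dict)  # shared index: key -> value

    def vote(self, key, proposed):
        """Toy acceptance rule: accept unless the proposal conflicts sharply
        with what this node already holds (a stand-in for local validation)."""
        current = self.ledger.get(key)
        return current is None or abs(current - proposed) < 0.5

def propose_update(nodes, key, value):
    """Commit the update everywhere only if a majority of nodes accepts it."""
    votes = sum(node.vote(key, value) for node in nodes)
    if votes * 2 > len(nodes):
        for node in nodes:
            node.ledger[key] = value
        return True
    return False

nodes = [LedgerNode("ship"), LedgerNode("aircraft"), LedgerNode("GBAD")]
print(propose_update(nodes, "target/route-7/value", 0.9))  # -> True (committed)
print(nodes[0].ledger)                                     # -> {'target/route-7/value': 0.9}
```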

In certain instances and use cases, particularly where situational awareness is desirable over a broad area (for example, in the context of distributed operations in the maritime domain), it may make sense to accept stringent standards of encryption as a means of utilising multiple network types. In other instances, for example within the land operating environment, the relative proximity of multiple units may allow for the use of mobile ad-hoc networks to enable the daisy chaining of data using less easily compromised modes of communication, such as millimetre wave radios or tropospheric scatter – in turn allowing for more flexible data standards, with lower latency and less of a requirement for operators to generate encryption keys. Security in this context is delivered by the network rather than the data, and the relative simplicity of the data being transmitted (which allows for low latency) does not need to be lost to a requirement for padding.

Since such a system does not yet exist, it can be directly shaped by central bodies in a way that creates a common good around which service-level lines of effort would adapt. Generating a set of differentiated data standards that might underpin such a system would be likely to secure service-level buy-in for several reasons. First, it simplifies the task of selecting standards for individual services. Second, such a system would not necessarily require services to immediately grapple with the intricacies of networking platforms to generate complete situational awareness. Rather, it would require the ability to receive and transmit simple data regarding the presence of friendly platforms and the value of any given adversary target across a shared indexing system. Third, a broad market of standards could allow services to weigh different trade-offs based on their tactical imperatives in a specific context. Generally, the acceptance of standards tends to depend on the number of impositions that the standards create: the more intricate a set of standards, the less likely it is that buy-in will be achieved. A flexible set of standards which is only restrictive on issues that directly pertain to security can circumvent this issue and, importantly, allow tasks to determine approaches. Because services can agree standards based on their appropriateness to a shared cross-service task, support is easier to secure: the need for a given standard is rooted in an operational requirement rather than a top-down diktat. This bottom-up approach has characterised previous successful efforts at cross-service coordination, such as the 31 Initiatives agreed between the US Army and US Air Force, which underpinned the implementation of AirLand Battle.

Conclusions

Several deductions emerged from the discussions held as part of the workshop. Of greatest salience was the suggestion made throughout the day that, as it approaches the task of integrating the force, StratCom will benefit from an approach that builds from relatively modest goals, but with a clear sense of where it is heading. It must eventually achieve the following aims:

  • Establish organisational procedures which allow it to determine integration requirements in light of operational demands.

  • Establish a shared set of routines for coordinating with FMC.

An effort to establish shared data standards could serve as a starting point. Standards need not immediately impinge on service-level prerogatives regarding platform-level decisions and can be approached in a flexible way, as illustrated by voluntary standards markets. The tasks for which standards are sought might initially involve less demanding requirements, such as a shared indexing system.

The creation of a range of options can be of greater utility than the imposition of standards, not least because standards imposed by fiat are often opposed, and moreover tend to create perverse incentive structures. By contrast, the creation of an option set allows one or more services to explicitly weigh trade-offs and justify choices made in light of the trade-offs acceptable in the specific operational contexts in which the assets of more than one FLC interact. On the basis of the system and precedents set, StratCom and FMC could begin to articulate a more coherent system and division of labour, which could then be applied to more complex systems engineering challenges.

Tackling the relatively modest task of creating a flexible set of data standards and solving broad tragedies of the commons could, then, create the conditions for a more ambitious future approach to integration.


Sidharth Kaushal is Research Fellow for Sea Power at RUSI. His research at RUSI covers the impact of technology on maritime doctrine in the 21st century, and the role of sea power in a state’s grand strategy.
