Companies across industries are navigating one of the most challenging business climates in recent memory. Economic growth continues to trend well below historical norms, creating pressure to reduce overhead and develop new revenue streams, while the divisive political climate spreads uncertainty across the organization, from marketing to the supply chain.
Perhaps most significant, though, are the ripple effects of advancing technology. Entire industries are being disrupted as never before. Banks face increasing pressure from non-traditional fintech firms. The gig economy threatens to upend the taxi and hotel industries. Tech-savvy firms are making inroads in a variety of arenas, including real estate, insurance and staffing.
On top of it all, a generation of consumers raised on technology clamors for ever-more customized service and convenience.
How can organizations survive—and even thrive—in such an inhospitable climate? The key lies in leveraging what is perhaps their greatest competitive advantage: their data. But they must do so in a more timely, secure and efficient manner.
Accordingly, companies are pushing toward more holistic operations, wielding technology, data and analytics across all relevant functions, including finance, marketing, customer intelligence and business operations. Of course, massive infrastructure development brings with it a multitude of technology risks.
To manage these risks, executives must first understand what they are. Traditionally, technology risk has been defined as the potential for a technology failure to disrupt the business, whether through software or hardware issues, security incidents, natural disasters, or simple human error.
While accurate, this definition is incomplete. Critically, it ignores the harmful impacts of a related but often-overlooked technology risk: operational inefficiencies in existing technologies and IT processes. Such risks surface in a number of areas and must be managed with the same energy and effort as their more widely recognized counterparts. For example:
Software architecture: Most current architectures are relics of a prior era. Especially damaging is siloed data, stored in different places and often in different formats. Analyzing data to detect fraud or deepen customer insight, for instance, requires moving data between environments and translating it, a slow and error-prone process. As a result, most organizations produce reports monthly or even less frequently, hindering their ability to extract timely insights and act on them.
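To make the silo problem concrete, consider a deliberately simplified sketch (the systems, field names and records below are hypothetical, invented purely for illustration): even two small data sources, one exporting CSV and one exporting JSON, require explicit translation into a common shape before any cross-silo analysis can begin.

```python
import csv
import io
import json

# Hypothetical exports from two siloed systems: a CRM that emits CSV
# and a billing system that emits JSON. Field names and formats differ,
# so each must be translated before the data can be analyzed together.
crm_csv = "customer_id,total_spend\nC001,250.00\nC002,99.50\n"
billing_json = '[{"cust": "C002", "balance": "10.25"}, {"cust": "C003", "balance": "0.00"}]'

def normalize_crm(text):
    """Translate the CRM's CSV export into common (id, amount) records."""
    return [
        {"id": row["customer_id"], "amount": float(row["total_spend"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def normalize_billing(text):
    """Translate the billing system's JSON export into the same shape."""
    return [
        {"id": rec["cust"], "amount": float(rec["balance"])}
        for rec in json.loads(text)
    ]

unified = normalize_crm(crm_csv) + normalize_billing(billing_json)
ids = sorted({rec["id"] for rec in unified})
print(ids)  # the combined customer population across both silos
```

Every new silo multiplies this translation work, which is one reason reporting cycles stretch to monthly or longer.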
Tooling: Legacy systems suffer from similar fragmentation. They often come with an expansive set of user tools that may be role-specific or environment-specific. While appealing to end users, a large constellation of applications is not only expensive but also complicates update and management processes. Enterprise-wide functionality, if it is possible at all, is achieved only at great expense.
Delivery and maintenance: Deployment is often a slow process requiring manual intervention, particularly on legacy architectures with a range of tools. Combinations of different versions of underlying architecture, tools and code are sequestered in their own environments, and new configurations may require provisioning additional hardware. As a result, most firms can only achieve a few iterations per year, leaving them slow to adapt to changing business needs.
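The configuration sprawl described above can be sketched in a few lines (the environment names and version manifests below are assumed for illustration; in practice they would come from deployment records or a configuration database): before promoting a release, compare the versions pinned in each environment and flag any drift.

```python
# Hypothetical version manifests for three environments, illustrating how
# different combinations of runtime, tooling and application code end up
# sequestered in their own environments.
environments = {
    "dev":  {"runtime": "3.11", "etl_tool": "2.4", "app": "1.9.0"},
    "test": {"runtime": "3.11", "etl_tool": "2.3", "app": "1.9.0"},
    "prod": {"runtime": "3.10", "etl_tool": "2.3", "app": "1.8.2"},
}

def find_drift(envs):
    """Return each component whose pinned versions differ across environments."""
    drift = {}
    components = {c for versions in envs.values() for c in versions}
    for component in sorted(components):
        versions = {name: envs[name].get(component) for name in envs}
        if len(set(versions.values())) > 1:
            drift[component] = versions
    return drift

drift = find_drift(environments)
for component, versions in drift.items():
    print(f"{component}: {versions}")
```

Reconciling every flagged combination by hand, often with fresh hardware provisioned for each new configuration, is what holds most firms to only a few iterations per year.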
Broadening the traditional concept of technology risk, it is easier to see how most businesses are poorly positioned to effectively navigate change. Their siloed environments are expensive and unresponsive, increasing both their direct costs and potential losses from improper risk assessment and missed business opportunities.
Promises and pitfalls of new technologies
Responding to their evolving circumstances, many organizations are gravitating toward a variety of emerging technologies. While these technologies can deliver significant advantages, they must be used appropriately to realize their full potential and return on investment.
For example, perhaps no single technology has been more disruptive than cloud architectures. Cloud-based applications offer metered pricing, scaling, fast implementation and, ultimately, the promise of significant direct and indirect savings. However, despite assurances of vendor-agnostic tooling, cloud services differ in meaningful ways that can compromise organizations’ ability to change vendors. Architectures based on public clouds (including hybrid) also raise information security concerns. In addition, existing architectures are often ill-suited for cloud implementation, forcing a choice between costly redesign or inefficient “lift and shift.”
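One common way to contain the lock-in risk, sketched below as a generic pattern rather than any particular provider's API (the interface and class names are invented for illustration), is to place a thin interface between application code and vendor services, so that changing vendors means rewriting one adapter rather than every caller.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal storage interface the application codes against.
    Each cloud vendor would get its own adapter behind this interface."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter for local development and testing. A real
    deployment would supply an adapter built on the chosen vendor's SDK."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, report_id: str, body: bytes):
    # The caller depends only on the interface, not on any vendor SDK.
    store.put(f"reports/{report_id}", body)

store = InMemoryStore()
archive_report(store, "q3", b"quarterly results")
print(store.get("reports/q3"))
```

The pattern does not eliminate switching costs, since vendor services still differ in semantics and performance, but it confines those differences to a single layer.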
Similarly, open source tools have also gained popularity, buoyed by a large and active user community. At first glance, they seem like a reasonable way to retain a variety of user interfaces at a minimum out-of-pocket cost. But hidden costs can profoundly diminish their value proposition. Notably, free distributions are typically unsupported, and commercialized versions have not proven economically viable. Further, many open source tools are not intended for enterprise-level integration and struggle to integrate more broadly in terms of data, security and controls. Maintenance can also prove difficult, as packages must be maintained locally and/or synchronized across multiple environments.
Despite the obstacles, companies can minimize technology risk, while still adapting to a modern business environment. Every design is different, of course, but the following fundamental design tenets can help projects across the spectrum:
- Embrace change. Reconsidering long-held processes may seem daunting, but it is helpful to act boldly and decisively to fully accommodate new technology. Porting existing architectures with minimal adjustment will likely only exacerbate existing difficulties.
- Build from the end. To best design the new architecture, rely on the end use as the guiding target. Choose technologies and architectures based on how they support the organization’s business aims.
- See the whole picture. While the ultimate use-case should significantly guide design decisions, do not neglect predecessor processes. Ensure that any adapted technology can function as part of an integrated, enterprise-wide platform able to support both current and future use scenarios.
- Balance. It may be tempting to adopt the newest shiny object, but every addition carries a cost. Organizations need a process for evaluating new technologies and incorporating them into the current computing environment. The same discipline applies to the number of vendors and tools: too many become difficult to manage. Carefully weigh the hidden costs of open source, and purchase commercial software where it makes the most sense.
When assessing technology risk, organizations should consider the full spectrum of technology risks—those posed by inefficiencies in technology and IT processes, as well as traditionally well-recognized risks like system failures and security threats.
Today’s legacy environments are being taxed to accommodate new business realities. New technologies offer tremendous potential to mitigate much of that risk, but organizations must use them thoughtfully and deliberately. Deployed as part of a well-considered program, they can deliver improved efficiency, greater market opportunities and better returns on investment.