This article is the second part of a two-part series looking at technology's unique intersection with ESG. In the first part, we looked at technology solving ESG challenges; this blog examines technology as an ESG risk.
Today, the cloud has democratized access to capabilities that digitize and network almost all aspects of a business and its operations. Solving core business issues with technology is Digital Transformation in action. For example, a modern manufacturer might gain tighter quality control through a visual AI tool that inspects parts for consistency while networked robots build an entire vehicle in concert.
ESG and sustainability are also core business challenges. As a result, technology and the cloud are used to solve these challenges. In some cases, technology will collect, store, and analyze data about stakeholders, like customers and employees. In other cases, it will optimize operations around things like climate risk and even help ensure sustainable sourcing. This digitization creates mountains of data that drive further improvements in a feedback loop.
Still, companies must consider the risks of connecting their business to powerful and instantly accessible technologies. Technology directly intersects with a company's stakeholders, operations, and products and therefore may appear as an issue on its Materiality Matrix. For example, ethical risks can quickly increase with scale, and criminal risks can arise from digitized operations.
Technology as an ESG Pillar
As we look to solve ESG challenges with technology or even modernize our business with Digital Transformation, technology can be its own risk. Let's look at two real-world examples to illustrate this point.
Healthcare is one industry that can benefit from technology. For example, personalized medicine can come from a patient's data, and visual AI can help uncover overlooked information in X-rays or patient records. Hospitals can now serve marginalized rural or urban communities remotely with telemedicine. These are examples of technology solving ESG (specifically S) challenges, but let's pivot to a risk with real-world consequences that emerged from technology's use.
In 2019, Nature reported that an algorithm widespread across major healthcare networks discriminated against Black patients. The algorithm had a built-in bias: it examined the cost of a patient's past healthcare to decide whether to recommend that patient for additional support. As a result, only 17.7% of patients that the algorithm assigned to receive extra care were Black. The researchers calculated that the proportion would be 46.5% if the algorithm were unbiased. Here, using scale to solve a servicing issue created real-life consequences for patients.
People build AI systems and algorithms. As a result, those systems can carry biases forward. A non-digitized system running at a small scale may include biases that go unnoticed when servicing a few people. Once digital technology and AI are introduced to scale up the servicing, the bias scales up with it. That appears to be what happened in this case.
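The mechanism is easy to see in miniature. Below is a hypothetical sketch, not the actual algorithm from the Nature study: a triage rule that uses prior healthcare spending as a proxy for health need. If one group historically had less access to care, and therefore lower recorded spending, equally sick patients from that group are referred less often, and the disparity repeats for every patient the automated system processes.

```python
# Hypothetical illustration of proxy bias (not the algorithm from the study).
# Prior spending stands in for health need -- a proxy that encodes historical
# differences in access to care.

def refer_for_extra_care(prior_annual_spend: float, threshold: float = 5000) -> bool:
    """Refer a patient for extra support if past spending exceeds a threshold."""
    return prior_annual_spend > threshold

# Two equally sick patients; patient B's group had less historical access
# to care, so less spending was recorded despite the same illness burden.
patient_a_spend = 6200
patient_b_spend = 3100

print(refer_for_extra_care(patient_a_spend))  # True  -> referred
print(refer_for_extra_care(patient_b_spend))  # False -> overlooked
```

A human reviewer might catch one such case; an automated system applies the same flawed proxy to millions of records, which is exactly how a small-scale bias becomes a large-scale harm.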
The second example illustrates a criminal risk that can arise from networked systems: Cybersecurity. Per IBM's 2021 Cost of a Data Breach report, companies often end up paying for remediation of a Cybersecurity incident rather than addressing the risk in the first place. Having spent most of my career in IT, I can attest that Cybersecurity was often viewed more as an insurance policy than anything else.
An unsecured VPN account and a shared account password found on the dark web allowed attackers to breach Colonial Pipeline. As a result, the pipeline shut down for days, affecting millions of US customers. The $4.4M ransom payment was close to the average cost of a breach cited in IBM's report. Here, networked operations turned technology into a financial risk, illustrating double materiality: the direct financial cost of the breach plus the broader economic repercussions.
The US government quickly addressed the issue across the industry by issuing new Cybersecurity regulations for pipeline providers. The government doesn't get involved every time a company or industry is hacked, but it might when the risk is systemic. As Cybersecurity threats increase, the SEC has issued a proposal for Cybersecurity disclosures. The SEC's language closely mirrors ESG terminology, including terms like risk, materiality, and decision-useful information.
Treat Technology like Any Other ESG Pillar
Our two examples illustrate the power of technology's impact on people. An algorithm put patients at risk in one case, and the other hit consumers in their pocketbook. Both issues could have been avoided with good governance and controls around technology.
Let's revisit our steps to identify and mitigate ESG risks from part 1 and apply them to these cases.
In the healthcare AI example, patients are the stakeholders. The impact manifests through consumption of healthcare services, influencing patient referrals and community engagement. The value creation case is compelling: a healthy patient.
When looking at the opportunity for technology and Digital Transformation, it's also critical to consider the risks. In the healthcare example above, the opportunity was using technology's scale to serve patients better. The trouble was that existing biases were scaled up with it, creating significant health risks for patients. On Microsoft's example of a Materiality Matrix, which maps importance to stakeholders against importance to business success, this issue belongs in the upper-right corner: Prioritize.
It is easier for technologists to explain mitigation efforts when technology is thoughtfully considered and reviewed. For example, any AI implementation must include responsible AI principles and continual review, because AI ethics are a material consideration for any company using AI. Microsoft's AI Business School is a great place to start, as it includes modules on this topic.
Regarding the Cybersecurity example, the stakeholders include both employees and consumers: employees need to be educated on Cybersecurity risks, and consumers were clearly impacted. In this case, I believe the materiality sits somewhere toward the middle of the matrix. The issue must be Prioritized, but it should also be monitored, may carry compliance concerns under the new SEC proposal, and should be managed over time.
Again, identifying the risk is the first step. Simple pre-emptive Cybersecurity controls and resilience planning could have reduced the risk. As double materiality, reputational risks, and regulations are better understood, companies will drive budget toward mitigation efforts like Cybersecurity.
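One such pre-emptive control relates directly to the Colonial Pipeline incident: rejecting credentials that already appear in known breach dumps. Below is a minimal sketch under assumed inputs; the breached set here is a small stand-in, where a real deployment would consume a breached-credential feed and compare hashes rather than plaintext.

```python
# Minimal sketch of a pre-emptive credential check: block passwords that
# appear in a known-breached list. Hashes are compared (SHA-1 here, purely
# for illustration) so the plaintext list is never stored.
import hashlib

# Stand-in for a real breached-credential feed -- illustrative entries only.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ("password123", "pipeline2020", "letmein")
}

def is_breached(password: str) -> bool:
    """Return True if the password's hash appears in the breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_HASHES

print(is_breached("password123"))   # True  -> block and force a reset
print(is_breached("xK#9!Lq2v$"))    # False -> allowed
```

A check like this costs little to run at account creation or password change, which is exactly the kind of inexpensive control that looks like an insurance policy until the day it prevents a multi-day shutdown.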
It's important to note that ESG risks, including these technology risks, don't necessarily ever go away. ESG is an evolution, not a journey with a clear end state, even at the issue level. Both AI and Cybersecurity represent ongoing risks because the technologies remain in use, new systems come online, and the world keeps changing. A company may move ESG issues toward the lower left of the matrix as initial mitigation efforts take hold, but the issues should remain plotted as risks.
In part 1, we covered how technology is applied to solve ESG challenges, create new experiences, network systems, and find new insights. However, in part 2, we discussed the risks inherent in digitizing business, and how a thoughtful approach that accounts for stakeholders and materiality is key to mitigating risks that can arise from technology.
In "Tools and Weapons: The Promise and the Peril of the Digital Age," Brad Smith wrote: "The issues of the twenty-first century require initiatives that are both multilateral and multi-stakeholder in scope." Understanding ESG and technology issues is critical to business resilience and requires companies to develop new muscles. Thoughtful consideration of ESG issues alongside technology can help a company control its risks before they develop into a crisis.
For more information on ESG, check out our whitepaper "Accelerate Sustainability with ESG Insights."