Digital technologies are increasingly integral to how we provide talent development and performance support. For example, medical mobile apps can give physicians point-of-care access to relevant drug and clinical information; conversational chatbots can deliver a more personalized way of learning by pushing content out to learners in the flow of work; and data analytics in the form of visual dashboards can centralize data points collected across various learning and performance-related activities, giving busy managers a high-level view of what goes on in the company.

When used well, these tools can detect useful patterns, provide new insights and help with decision-making. However, despite the best intentions of their creators and implementers, there remains a tension in business between the desire to close talent gaps and the potential of these tools to cause harm.

Consider one such case: Microsoft 365 includes a productivity score feature that allows managers to evaluate individual-level employee data, monitor activities and track where employees spend their time – a capability that drew particular attention during the pandemic, when the majority of people were working from home. Ever since its launch, however, the tool has garnered much criticism, with some deeming it full-blown workplace surveillance. After much backlash, Microsoft made changes, removing individual names and providing only aggregate data at the organizational level.

How can we as learning professionals who evaluate and utilize these tools think more deeply about their impacts?

Be Informed About Technology Ethics

Learn as much as you can about the various aspects of technology ethics and ask questions such as: What are we trying to achieve with these technologies? Can the technologies solve the problems we identified, and at what cost? Who will benefit and who will be disadvantaged by the change? What assumptions are we making?

Keep in mind that we do have choices in the technologies we implement and use. Suppose, for example, that artificial intelligence (AI)-based conversational chatbots are implemented for coaching and mentoring sessions. Do we know what assumptions are built into the way the chatbots respond to people from different cultural backgrounds? Do they disengage and demotivate people as a result of biased information (or a lack of relevant information)?

Ensure Technology Governance

As organizations select and adopt new technologies, governance is rarely at the forefront of the project. While it is tempting to implement now and regulate later, it is important to have a governance framework in place from the beginning and throughout the adoption process. This is particularly critical for new and emerging technologies such as AI and machine learning, since little research and few case studies exist on their impacts, risks and unintended consequences – even more so when they touch on issues of privacy and mass surveillance within an organization.

For example, many talent development tools have a predictive function that managers can use to gauge people’s performance potential and make decisions accordingly. What guidelines and regulations can we use to apply these predictions in a responsible way?

Practice Responsible Usage of Technology

While it is important to have technology governance in place, governance alone does not guarantee that more ethical behavior will follow. Organizations need to adopt a responsible approach to ensure that the benefits outweigh the risks, and that the technologies support rather than undermine people's autonomy.

To sum up, creating and sustaining a culture of responsible use of performance support technologies must be a deliberate, thoughtful and iterative process. It will require efforts from across the organization, but it is a journey that is well worth taking.