A leader’s take on the six dangers of ChatGPT for organisational leadership

Paul Aladenika
5 min read · Mar 19, 2023
Image courtesy of Zac Wolff on Unsplash

This is the second of a two-part blog on the artificial intelligence (AI) tool ChatGPT. The first blog was a leader’s take on six applications of ChatGPT in organisational leadership.

If you have already tested the functionality of ChatGPT, I am sure that, like me, you will have been greatly impressed. For what it offers to the wider world of organisational leadership, ChatGPT is a game-changer. However, as much as the scope and scale of the product, and its myriad potential applications, provide huge opportunities, they also raise numerous legitimate concerns.

To better contextualise these concerns, I would encourage you to visualise three models of organisational engagement with ChatGPT. The first of these models is the early adopters, who will enthusiastically embrace the product and eventually surrender significant areas of organisational autonomy to it. The second is the pragmatists who, acceding to inevitability and keen not to be left behind, will incorporate ChatGPT into their organisational modernisation programmes. The final group will be the cautious and careful, who will be much more deliberative about the benefits of the product, whilst remaining constantly alert to the potential risks.

With the contextual framework set, here is a leader’s take on the six dangers of ChatGPT for organisational leadership.

1. The ‘Zelenskyy effect’

I asked ChatGPT to indicate whether AI has any role to play in complex issues requiring crisis leadership. To mix things up, I cited Ukraine President Volodymyr Zelenskyy as an example of how logic can sometimes be confounded. ChatGPT acknowledged that the human qualities involved in leadership are difficult, if not impossible, for AI to evaluate alone. It added that relying on AI to judge the suitability of a person for a leadership role in high-stakes situations could lead to unintended consequences and poor decision-making. This is the most obvious danger for any ChatGPT romantics out there. AI lacks appreciation of the human capacity to adapt. Organisations incautious enough to defer to ChatGPT without question should take note: artificial intelligence is no match for human ingenuity.

2. The ethical smokescreen

Ask the ChatGPT language model to make recommendations that run counter to its ethical framework and your request will be met with a polite and well-scripted rejection. However, as I discovered, if an organisation wanted to exploit the tool to receive a particular answer to a recruitment query (no matter how unethical), it only needs to frame the question in proximal terms and ChatGPT will give it the answer it is looking for. Intelligent it may be; wise it is not. As it stands, the capabilities of the software are simply not astute enough to suss out ulterior motives, much less ill intent.

3. The rise of the Übermensch

Anyone who has been involved in leadership development will attest to the fact that it is a process of painstaking graduation, not rapid acceleration. They will also attest to the fact that at its best, leadership development produces diversity not homogeneity. However, my exploration of ChatGPT reveals that it lacks the sophistication needed to fully understand this complexity and ambiguity. Therefore, organisations inclined to rely on ChatGPT’s recommendations could unwittingly set the bar too high or define leadership capabilities in terms that are overly prescriptive, making ascension into leadership roles unattainable for some.

4. The tyranny of artificial logic

If ChatGPT were an adolescent, you might be inclined to call it a ‘smart mouth’. It seems to have a response for everything, which at some level can be quite reassuring. Notwithstanding, to get the best from ChatGPT you cannot afford to be incurious in the face of the tidal wave of carefully crafted insight that will assail you. There is an eerie certitude about the way in which ChatGPT’s recommendations are presented that can be quite intimidating to the unrehearsed and unresearched. Therefore, if you are not prepared to thoroughly test its reasoning, examine its facts and push back, ChatGPT will walk all over you.

5. Bias confirmation

I have sat in more than enough interviews to know that there is an unfortunate, but natural, tendency towards bias in the recruitment process. Recruitment bias even has an unfortunate prefix: ‘unconscious’. Whilst ChatGPT is luxuriously sophisticated in so many respects, everything about it suggests to me that it will succeed in confirming existing biases. Not because it pushes one in a particular direction, but because of its tendency toward caveat, which will leave just enough room for those shopping around for reasons to say ‘yes’ to say ‘yes’, and those who want to say ‘no’ to say ‘no’. For some organisations, ChatGPT will come out of the box whenever convenient and be put back again when it is not.

6. Outsourced thinking

Have you ever received a report or briefing where the author appears to have selected the ‘change all’ option on the spell-checker without checking the corrections? The risk with ChatGPT is that some organisations will be seduced by the idea of outsourcing their decision-making to advanced software. When those short of time encounter the seductive, due diligence is the inevitable casualty. Organisations that tread this path are likely to be those at the higher end of the AI-dependency scale, and their rationale will be predicated on the assumption that if AI can do your thinking for you, then you don’t have to do it for yourself.

So, all things considered, do the benefits of ChatGPT as a tool for organisational leadership outweigh the risks? That will depend on the way in which each organisation deploys it. Doubtless some will be more forward-leaning and find innovative ways to utilise the product, whilst others, perhaps unsure as to its strategic value, will take longer to get to grips with it. Notwithstanding, I think the greatest danger of ChatGPT is that, over time, human decision-makers will opt to step aside and defer to the software. Ask it for an opinion and even ChatGPT will affirm that its best utilisation is as an aide to human decision-making, not as its replacement.
