This month, Ruth Kaufman examines professional codes of conduct after receiving the CSci accreditation; the new Journal of Business Analytics editors tell us what to expect; Malcolm Fenby gives us a considered comparison of the deaths from breast and prostate cancer; and we revisit this year's Beale Lecture, which sparked some lively debates.
More Than Just the Letters
RUTH KAUFMAN OBE FORS FIMA CSci
Blessing Mutanga, David Croisdale-Appleby and I have at least one thing in common. We are the first three people to be awarded Chartered Scientist (CSci) status by The OR Society, following the Science Council’s licensing of the Society as an awarding body. That means we are also the first three people to sign up to conform to The OR Society Code of Conduct.
The OR Society has long had a ‘Statement of Ethical Principles’, but this is by way of support and illumination, to help people understand what ethical conduct looks like. It has never been mandatory. For us CScis, however, if anybody complains to the Society that we have breached the Code, we will have to go through a process that may result in removal from the CSci register.
Does ‘mandatory’ make a difference? If we do not abide by ethical principles, we may (if found out) earn the contempt or mistrust of our peers, disciplinary action from our employer, loss of trust from our clients, or sleepless nights pricked by conscience. These all seem to me a good deal worse than losing a few letters from after my name. Nonetheless, explicitly signing up does concentrate the mind on what it means to be ethical, and encourages us to consider whether our behaviour really does match up.
At first glance, the Code looks quite easy for a decent professional to comply with. It has only four main requirements: accuracy and rigour; honesty and integrity; respect for life, law and the public good; and responsible leadership. Who wouldn’t claim to comply with these? And indeed, the Code is easy to comply with – in the same way as, when I worked at London Transport in the 1970s, we agreed that it would be easy to run the buses on time if only there were no passengers. The real world will keep getting in the way.
Many of the dilemmas and challenges of real-world OR have been explored in the literature on ethics and OR. What do we mean by ‘the public good’ when there are different ‘publics’ with incompatible needs, when short-term benefit has long-term costs or vice versa, when the powerless have no voice, when outcomes are uncertain? How accurate should ‘accurate’ be, when the budget is limited and the value of improved accuracy is unknown? How much time should you spend conveying caveats to your client, if the client doesn’t have the time or inclination to pick up on the difference between ‘imperfect’ and ‘useless’? If you haven’t yet looked at this, Ormerod and Goodrich’s overview of the literature is an excellent starting point.
In my own practice I have been particularly perplexed by the dilemmas that arise from the social world in which we operate, and the relationships and obligations that we pick up on the way. To name one: managerial obligations. If you are leading a team, they become stakeholders in your decision-making. If you challenge a client for improper use of the modelling, will it be the team that bears the brunt of subsequent withdrawal of work? If you want somebody to have developmental experience, how far should you also be second-guessing and intervening in their work with the client; and if you don’t, is the client getting a second-class service?
And another: your organisation’s culture may be to spend a lot of time on training or very little; lots of collegiate responsibility for decisions or very little; writing up for the wider community or not. If you go against organisational culture, you may damage the chances that people will see you as one of the team, and want to work with you.
And then there are relationships. I surely speak for most people who apply OR when I say that good relationships are crucial to successful projects. So when your client, or your manager, says, for example, ‘do not explore that scenario because it will distract us from my focus’, or ‘do not speak to that stakeholder because their opinion is irrelevant’, how much do you argue with them? How far are you willing to go above their head, or behind their back? At what point does your disagreement with their judgement call lead you to disrespect their judgement and undermine them, or worse, to break that relationship altogether? And if the cost of breaking the relationship is no longer working with or for them, no longer bringing the power of OR to bear on the important decisions they are making: is that cost worth the benefit?
Even perfect professionals can be challenged by these dilemmas, but few of us are perfect, and that brings a whole lot more challenge. Put unkindly, we may be too lazy to fully document a model or to think through alternatives before leaping to conclusions, or too arrogant to listen to others. We may take the ‘expert’ role in order to silence other voices, be too cowardly to confront clients who misuse our models, or oversell our competence to get the job. But is it laziness or good prioritising and time management? Cowardice or pragmatism? How can we tell where the dividing line falls?
Perhaps this is where Onora O’Neill’s work on trustworthiness comes in. ‘Trust’ is a great thing for an OR consultant to get from our clients or our research peers. I would go so far as to say that without trust, it would be impossible to do useful OR. But O’Neill points out that for ‘trust’ to be a good thing, the recipient must be ‘trustworthy’; and trustworthiness requires three things: honesty, competence and reliability. One way of reflecting on the honesty of our own behaviour is to ask ourselves: if a client or peer whom we respect knew the whole story, would they think we were effective time managers and down-to-earth pragmatists, or would their trust in us be damaged?
These reflections on ethical practice are already more than twice as long as the Code of Conduct, and they only touch on the subject. Which rather supports the conclusion I came to in my talk on this subject at OR59 (available on the Society’s document repository). Ethics of OR is a bit like OR itself: there are plenty of problems, guidance on how to tackle them, suggested solutions – the difficult thing is the implementation. But it is absolutely essential to try. And not just for the sake of the letters after the name.