Cautionary Tales
Generative AI has been with us for just over two years now. In that time, in bidding, I have seen it used responsibly by some, incorrectly by others, and irresponsibly by too many. AI is simply a tool, and like any tool it must be used in the right way, or (arguably) not at all; used wrongly, any tool can end up injuring you. With that rather tortured metaphor out of the way, I have outlined below some anonymised terrible uses of AI as cautionary tales – some of them from BidVantage, and others I have heard from colleagues elsewhere:
· Tale 1: A start-up social care company thought they could write a bid for a council framework using just ChatGPT. Unsurprisingly, they did not get onto the framework. They asked a Bid Consultant friend of mine to review their answers after the result to understand what went wrong. The answer was quite simple: generic, disjointed responses that barely answered the questions. As a side note, there were a lot of Zs and Americanisms throughout their answers.
· Tale 2: A client relayed to us that they had uploaded various documents and data to an AI platform. When we asked them if the platform was safe and would not use their data to train the model, the client was first confused, and then worried! Unfortunately for the client, their data was being used to train a publicly available model. There were information governance consequences for this, as well as the potential for commercially sensitive information entering the public domain.
· Tale 3: We were asked to review a soon-to-be-submitted bid at short notice. The client said that the answers had been reviewed internally, and they believed they would score maximum or near-maximum marks for each. However, the Chief Executive was suspicious of this internal scoring and wanted us to provide a second opinion. Once on the project, we learned that the answers had been reviewed internally simply using AI! We asked to see the outputs and were dismayed by what we saw. The AI did not have the information, intelligence, or nuance to score the answers against the evaluation criteria. After our review, we scored every answer 1–3 marks (out of 5) below what the AI had said.
· Tale 4: AI models hallucinate, lie, and make false promises. This was more the case two years ago, but it still happens fairly regularly. When it comes to writing a bid, this is a serious problem! A friend of mine working in recruitment told me about a bid their assistant had put together for an important preferred supplier agreement. When my friend reviewed the bid, he found it well written but full of information that simply was not true. When he asked the assistant about it, they admitted they had written some of it using ChatGPT.
Responsible Use (or not at all)
So what was the point of each of these tales? Am I trying to scare people away from using AI so that they spend money on humans like us at BidVantage instead? Not in the slightest. The key point is that if you use AI, or any other business tool, use it responsibly and with your eyes wide open.
Key tips if you are using AI:
· Develop an AI-use Policy – so that everyone in your organisation understands what to use it for, how to use it, when to use it, and why.
· Find a trusted and safe model/platform/programme.
· Check data security and information governance implications.
· Understand how AI prompting works and how to get the best results.
· CHECK EVERYTHING the AI produces for you.