AI and Ethics in Social Value: Help or Hype?

Artificial Intelligence is transforming every aspect of the bid lifecycle, from capture planning to response optimisation. But what does it mean for social value? For APMP professionals bidding across the public sector (and enterprise), AI presents both an opportunity and a dilemma: it can sharpen insights and support response creation, but it also risks encouraging lazy, generic responses that alienate buyers.
So, how do we use AI wisely and ethically in a space that is focused on human impact?
AI as an enabler, not a replacer
One key takeaway from our roundtable is that AI must support, not supplant, the social value process. At its best, AI helps us to:
- Research local needs: analysing strategies and statistics to help align with buyers’ needs.
- Generate ideas: identifying opportunities aligned to local priorities.
- Scan for good practice: surfacing case studies and benchmarks from the wider market.
- Sense-check content: testing responses against the Social Value Model and buyer expectations.
- Analyse trends: turning delivery data into insight for continuous improvement.
However, without human oversight, AI risks churning out generic answers that lack authenticity, feasibility, or local relevance.
Risks and responsibilities
Using AI in social value raises real concerns:
- Overclaiming: AI can produce polished but unrealistic promises that are impossible to verify or meet in the real world.
- Greenwashing and virtue signalling: Superficial AI-generated responses may look compliant and sound impactful, yet ultimately fail to deliver measurable impact.
- Bias in, bias out: If the training data reflects systemic gaps (e.g., geographic or demographic data scarcity), the AI's outputs and recommendations will inherit and amplify those inequities.
- Data ethics: The use of AI in reporting and trend analysis must respect privacy laws and avoid creating unintended consequences for sensitive communities or stakeholders.
- Accountability and auditability: This is the ‘Black Box Problem’: AI-generated conclusions often lack transparent, human-readable reasoning (explainability). The result is an ‘accountability gap’ that makes it difficult for auditors, boards, or regulators to trace the model’s logic and assign responsibility when a failure or misrepresentation occurs.
What PPN 002/25 implies
While PPN 002/25 does not yet reference AI explicitly, its emphasis on specificity, feasibility, monitoring, and transparency provides an ethical framework that bid teams can apply to AI-generated content:
- Can we deliver this?
- Can we evidence it?
- Does it align with buyer intent and local needs?
If the answer to any of these questions is no, the commitment does not belong in the bid.
Practical recommendations for bid teams
To use AI effectively and reliably in social value responses:
- Build a trusted content library: Use your chosen platforms to store validated, up-to-date reference material.
- Use detailed prompts: Avoid shallow inputs. Prompt AI with contract specifics, local data, and delivery evidence.
- Create an AI prompt library: Share what works across your team, aligned to the Social Value Model.
- Sense-check with SMEs: Always pair AI outputs with subject matter expertise.
- Offset AI's footprint: Be mindful of carbon emissions linked to extensive AI use and consider how to mitigate.
Toward ethical automation
AI is here to stay, and in the right hands, it can elevate the strategic role of social value in bidding. However, it must be treated as a co-pilot, not an autopilot. The more we pair technology with critical thinking and ethical practice, the more powerful and credible our bids will become.
As bid professionals, the responsibility lies with us: to challenge, validate, and ultimately humanise the AI inputs we use. Social value isn’t just about how fast or efficiently we can write. It’s about commitment to change and delivering true impact.
This blog is based on output from the AI in Social Value Roundtable team at the APMP UK Social Value Roadshow on 18th September.