Project: Team Health Check

Team members – Sam, Anita, Ellie, Divya, Zoe, Jason and Bijal

Sprint 1 – Create and Launch MVP with 3 Questions

Sprint Goal:

Create and launch a Minimum Viable Product (MVP) containing 3 initial assessment questions.

Completed Items (Done):

Tested functionality with one sample question.

Secured data handling — ensured no public data leaks.

Generated QR code and hyperlink for user access (a generation sketch follows this list).

Created a basic homepage layout.

Defined initial question themes and metrics.

Designed “Zen Meter” name and tagline.

Developed a PDF output option for results.

Mapped initial user flows and journey.

Considered optional add-ons for team coaching and reflection.
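
As an illustration of the QR code and hyperlink step above, here is a minimal sketch in Python. The qrcode package and the assessment address are assumptions for the example; the post does not say which stack or URL the team actually used.

# Minimal sketch: produce a shareable link and a matching QR code for one team.
# Assumptions (not from the post): the tool lives at a placeholder URL and the
# "qrcode" package is installed (pip install "qrcode[pil]").
import qrcode

ASSESSMENT_URL = "https://example.com/zen-meter"  # hypothetical address

def build_access_artifacts(team_id: str) -> dict:
    """Return the hyperlink and write a QR code image for one team."""
    link = f"{ASSESSMENT_URL}/assess?team={team_id}"
    image = qrcode.make(link)                    # encode the link as a QR code
    filename = f"zen_meter_qr_{team_id}.png"
    image.save(filename)                         # shared alongside the plain hyperlink
    return {"link": link, "qr_file": filename}

if __name__ == "__main__":
    print(build_access_artifacts("pilot-team"))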

Testing Summary:

Focused on validating technical functionality for question flow and data submission.

Verified QR code functionality and result generation.

Ensured secure handling of user data.

Insights / Lessons Learned:

Core functionality works reliably.

The user flow needed to be clearer for non-technical testers.

Naming and branding sparked positive team engagement.

Sprint 2 – Test Themes Functionality and Create Final 24 Questions

Sprint Goal:

Test the “themes” concept and finalize the full 24-question set.

Completed Items (Done):

Tested multiple teams using the tool simultaneously.

Implemented traffic light scoring (visual health indicators); a scoring sketch follows this list.

Replaced “Maturity” language with “Health.”

Created three main “theme” categories for reporting.

Tested question replacement capability.

Introduced a score breakdown in reports.

Fixed PDF sharing bug.

Promoted Scrum Alliance links and messaging.

Added team name to reports for better traceability.

Finalized question sets for “People & Behavior,” “Process & Practices,” and “Technical Foundations.”
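
A minimal sketch of how the traffic light scoring and per-theme breakdown could work is shown below. The 1 to 5 answer scale and the green/amber/red thresholds are assumed values for illustration, not the cut-offs Zen Meter actually uses.

# Minimal sketch of per-theme traffic light scoring.
# Assumptions (not from the post): answers use a 1-5 scale, green starts at 3.5
# and amber at 2.5; the real scale and thresholds are not stated.
from statistics import mean

THEMES = ["People & Behavior", "Process & Practices", "Technical Foundations"]

def traffic_light(score: float) -> str:
    """Map an average theme score to a visual health indicator."""
    if score >= 3.5:
        return "green"
    if score >= 2.5:
        return "amber"
    return "red"

def theme_breakdown(answers: dict) -> dict:
    """Aggregate raw answers into a score and a light per theme for the report."""
    report = {}
    for theme in THEMES:
        avg = mean(answers[theme])
        report[theme] = {"score": round(avg, 2), "light": traffic_light(avg)}
    return report

if __name__ == "__main__":
    sample = {
        "People & Behavior": [4, 5, 3, 4],
        "Process & Practices": [3, 2, 3, 3],
        "Technical Foundations": [2, 2, 3, 2],
    }
    print(theme_breakdown(sample))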

Testing Summary:

Comprehensive multi-team validation of report generation.

Theme testing confirmed report readability and scoring logic.

Bug testing revealed and resolved PDF sharing issues.

Insights / Lessons Learned:

Themes improve result interpretation.

Traffic light indicators need enhanced visual feedback.

Collaboration across multiple teams validated scalability.

Sprint 3 – Complete User Testing and Feedback on the Full 24-Question Assessment

Sprint Goal:

Conduct full user testing and gather feedback on all 24 questions.

Completed Items (Done):

Uploaded and tested all 24 questions.

Ran multiple user testing sessions.

Adjusted question wording and headings (“Zen Meter” updates).

Added comments and reflections to each question.

Improved overall visual layout and graph readability.

Promoted the tool more broadly within pilot teams.

Created thematic feedback categories (“Agility Meets Stability,” “Empowering Teams,” etc.).

Testing Summary:

Gathered structured feedback from end users.

Updated interface and readability based on user input.

Introduced qualitative feedback mechanisms per question.

Insights / Lessons Learned:

Users appreciated the contextual feedback option.

Graph readability improved but still required optimization.

Naming consistency across screens enhanced user trust.

Sprint 4 – Road-Test the Quality of AI Analysis

Sprint Goal:

Test and validate the quality and accuracy of AI-driven analysis.

Completed Items (Done):

Added text comment fields for richer insights.

Integrated AI-based report generation for feedback summaries (a generation sketch follows this list).

Conducted user journey review with test teams.

Refined hover definitions for report terms.

Tested AI commentary against human feedback (5 expert users).

Highlighted key AI insights and confidence scoring.

Merged duplicate questions and streamlined wording.

Improved leadership and empowerment metrics.

Consolidated knowledge base for training and calibration.
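
A rough sketch of what the AI-based summary step could look like is below. The OpenAI client and the gpt-4o-mini model are stand-ins; the post does not say which provider, model, or prompt the team integrated.

# Minimal sketch of AI-generated feedback summaries for a report.
# Assumptions (not from the post): an OpenAI-compatible chat API, the model name,
# and the prompt wording are placeholders; OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()

def summarise_feedback(theme_scores: dict, comments: list) -> str:
    """Ask the model for a short, clearly labelled summary of one team's results."""
    prompt = (
        "Summarise this team health check in three bullet points and flag one risk.\n"
        f"Theme scores (1-5): {theme_scores}\n"
        "Free-text comments:\n- " + "\n- ".join(comments)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are an agile coach writing neutral, factual summaries."},
            {"role": "user", "content": prompt},
        ],
    )
    # Label the output as AI-generated, in line with the trust lesson noted below.
    return "AI-generated summary:\n" + response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_feedback(
        {"People & Behavior": 4.1, "Process & Practices": 2.9, "Technical Foundations": 3.4},
        ["Stand-ups run long.", "Pairing is working well."],
    ))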

Testing Summary:

Compared AI-generated analysis with manual interpretation (a comparison sketch follows this summary).

Validated consistency and reliability of automated insights.

Gathered user perception data on AI clarity and helpfulness.
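
One simple way to compare the AI commentary with the five expert reviewers is a per-question agreement rate, sketched below. The label categories and the scoring method are illustrative only; the post does not describe how the comparison was actually measured.

# Minimal sketch: agreement between AI-assigned and expert-assigned labels per question.
# Illustrative only; the label set and the scoring method the team used are assumptions.
def agreement_rate(ai_labels: dict, expert_labels: dict) -> float:
    """Fraction of shared questions where the AI label matches the expert consensus."""
    shared = ai_labels.keys() & expert_labels.keys()
    if not shared:
        return 0.0
    matches = sum(ai_labels[q] == expert_labels[q] for q in shared)
    return matches / len(shared)

if __name__ == "__main__":
    ai = {"Q1": "healthy", "Q2": "at-risk", "Q3": "healthy"}
    experts = {"Q1": "healthy", "Q2": "healthy", "Q3": "healthy"}
    print(f"Agreement: {agreement_rate(ai, experts):.0%}")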

Insights / Lessons Learned:

AI summaries effectively captured trends but missed nuanced context.

More training data required for leadership-related insights.

Clear labeling of AI-generated content improved trust.

Feedback loop established for continuous AI improvement.

Overall Project Learnings

Incremental, test-driven sprint structure enabled stable evolution.

Clear thematic structuring (“People,” “Process,” “Tech”) ensured coverage.

Early user testing surfaced critical UX improvements.

AI integration added analytical depth but required calibration.

Cross-team feedback cycles boosted engagement and accuracy.