Cooking App Evaluation Plan

Comprehensive evaluation of learning, planning, and behavioral outcomes for a modern cooking app.

1. FOCUS

What will we evaluate (which program or aspect of a program)?
Effectiveness of the cooking app's learning components (videos, AI feedback, personalization, and community features) in supporting learners' cooking skills, meal planning, and confidence.

Evaluation Plan Worksheet

Question 1: Are users gaining foundational cooking skills?
  Indicators / Evidence:
    1) % of users demonstrating skill mastery in app assessments
    2) Self-reported confidence in cooking
    3) Types/complexity of meals cooked over time
    (a computation sketch for indicators like these follows the worksheet)
  Timing: End of each learning module; 3-month follow-up
  Sources: Learners, app backend
  Methods: In-app quizzes, surveys, image uploads of meals
  Sample: All active users
  Instruments: Cooking skill rubric, post-module survey

Question 2: Are users making smarter meal planning decisions?
  Indicators / Evidence:
    1) # of personalized plans created
    2) % of meals aligned with dietary goals
    3) Learner reflections on planning success
  Timing: After two weeks of using the personalization tool; mid-program checkpoint
  Sources: App logs, user journal entries
  Methods: App usage data, journaling
  Sample: Users who engage with personalization
  Instruments: Meal plan tracker, planning reflection form

Question 3: Do users feel supported and get immediate help when needed?
  Indicators / Evidence:
    1) Frequency of chatbot use
    2) Number of questions answered via AI
    3) Satisfaction ratings of the AI chatbot and community features
  Timing: Monthly reviews
  Sources: AI chatbot logs, user feedback
  Methods: Chatbot data analysis, satisfaction surveys
  Sample: All users engaging with support features
  Instruments: Support satisfaction survey

Question 4: Are mid-term outcomes such as confidence, motivation, and dietary behavior improving?
  Indicators / Evidence:
    1) Increase in self-reported motivation
    2) % reporting better time management
    3) Changes in grocery lists or meal frequency
  Timing: End of program; 6-week follow-up
  Sources: Users, app logs
  Methods: Surveys, optional interviews, data comparison
  Sample: All users who completed the program
  Instruments: Behavior change survey, follow-up interview guide
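
Several of the indicators above (skill-mastery rate, goal-aligned meals, chatbot usage) reduce to simple aggregations over app backend exports. The sketch below is a minimal illustration of how they might be computed in Python; the file names, column names, and mastery cutoff are assumptions for illustration, not the app's actual schema.

import pandas as pd

# Hypothetical exports from the app backend; file and column names are
# assumptions, not the app's actual schema.
assessments = pd.read_csv("module_assessments.csv")  # user_id, module_id, rubric_score (0-100)
meals = pd.read_csv("logged_meals.csv")              # user_id, meal_id, meets_dietary_goal (0/1)
chatbot = pd.read_csv("chatbot_sessions.csv")        # user_id, timestamp, questions_answered

MASTERY_CUTOFF = 80  # assumed rubric threshold for "skill mastery"

# Question 1, indicator 1: % of users demonstrating skill mastery in app assessments
best_score_per_user = assessments.groupby("user_id")["rubric_score"].max()
pct_mastery = (best_score_per_user >= MASTERY_CUTOFF).mean() * 100

# Question 2, indicator 2: % of logged meals aligned with dietary goals
pct_goal_aligned = meals["meets_dietary_goal"].mean() * 100

# Question 3, indicator 1: average chatbot sessions per active user, by month
chatbot["month"] = pd.to_datetime(chatbot["timestamp"]).dt.to_period("M")
sessions_per_user_month = chatbot.groupby(["user_id", "month"]).size()
avg_monthly_use = sessions_per_user_month.groupby(level="month").mean()

print(f"Skill mastery rate: {pct_mastery:.1f}% of assessed users")
print(f"Meals aligned with dietary goals: {pct_goal_aligned:.1f}%")
print("Average chatbot sessions per active user, by month:")
print(avg_monthly_use)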

2. Evaluation Focus & Questions

1. Focus: What is most important to evaluate and why?
We chose to focus on the effectiveness of the cooking app's learning components (videos, AI feedback, personalization, and community features) in supporting learners' cooking skills, meal planning, and confidence. These are the core instructional features of the program, and evaluating their impact helps us determine whether the app is achieving its primary learning objectives. It also provides insight into how different design elements contribute to learner outcomes, which is essential for iterative improvement.

2. Evaluation Questions

Who might use the evaluation, what do they want to know, and how will they use results?

  • Product designers want to know which features improve user learning and engagement; they would use results to improve app features and usability.
  • Curriculum developers want to know whether learners are achieving the intended cooking and planning skills; they would use results to refine the learning modules.
  • Investors/stakeholders want to know whether the product demonstrates behavior change and long-term impact; they would use results to decide on future funding or scale-up.
  • Users want to know whether the app helps them build cooking confidence and save time and money; they would use results to decide whether to continue using the app.

What categories of evaluation questions are being addressed?
  • Process: frequency of chatbot use, types of meals cooked
  • Outcome: mastery of cooking skills, meal planning success
  • Impact: changes in confidence, dietary behavior, and motivation

What is the balance of Formative to Summative Evaluation questions and why?
We included a mix of formative (e.g., module-end reviews, monthly chatbot feedback) and summative (e.g., end-of-program surveys, 3-month follow-up) questions. This balance allows us to make real-time improvements while also evaluating overall effectiveness after learners complete the program.

3. Indicators / Evidence

Evaluation Question #1

Indicator 1: % of users demonstrating skill mastery in app assessments
  Ratings: Direct 3, Specific 3, Useful 3, Practical 3
  Type: Quantitative
  Comments: Captures direct performance via built-in quizzes or tasks

Indicator 2: Self-reported confidence in cooking
  Ratings: Direct 2, Specific 2, Useful 3, Practical 3
  Type: Qualitative
  Comments: Adds learner perception, though self-report may be biased

Indicator 3: Types/complexity of meals cooked over time
  Ratings: Direct 2, Specific 3, Useful 2, Practical 2
  Type: Mixed (quantitative + qualitative)
  Comments: Shows skill progression, but may require subjective analysis or photos

Overall adequacy:
Taken together, the indicators provide a well-rounded view of whether users are gaining foundational cooking skills. They combine both performance data and self-perception, which helps triangulate results. While reliance on self-report is a limitation, this is balanced by objective in-app assessments and user behavior logs.

Timing

  • End-of-module check-ins help monitor learning incrementally.
  • Monthly and mid-program checkpoints identify areas for early improvement.
  • End-of-program and follow-up assessments capture retention and behavior change.

Strengths, Weaknesses & Concerns

Overall, we think the evaluation plan does a good job covering the most important areas—like whether users are actually learning to cook, feeling more confident, planning meals better, and getting support when they need it. The indicators we chose are closely tied to what the app is trying to teach, which makes the data more meaningful. That said, we are relying a lot on self-reported data, which could be biased or not fully accurate. Some of the behavior-based indicators, like changes in grocery shopping habits, might also be tricky to measure clearly. We're also a bit concerned about whether users will actually respond to follow-up surveys, and whether we can keep the data collection tools consistent across different parts of the app.

Kirkpatrick Evaluation

Level 1: Reaction
  Question: How did students feel about the learning experience? Was it engaging, clear, and useful?
  Method: Post-experience survey with rating items and open-ended questions
  Type: Both qualitative and quantitative
  Analysis: Quantitative responses will be averaged for satisfaction scores; qualitative comments will be coded for themes like clarity, enjoyment, and perceived relevance.

Level 2: Learning
  Question: What meal planning knowledge and skills did students gain?
  Method: Pre/post assessment comparing understanding of budgeting, dietary balance, and time-saving strategies
  Type: Quantitative
  Analysis: Compare pre/post quiz scores and use statistics to assess learning gains (a worked sketch follows this table).

Level 3: Behavior (transfer of training)
  Question: Are students applying meal planning strategies in their daily life?
  Method: Follow-up reflection plus screenshots or logs of actual meal plans or cooking attempts 2–3 weeks later
  Type: Qualitative
  Analysis: Identify behavior changes such as healthier choices, time/budget awareness, or consistent meal prepping.

Level 4: Results
  Question: Has the cooking experience improved students' independence, health habits, or food spending?
  Method: Self-report survey plus optional budget tracking over a month
  Type: Both qualitative and quantitative
  Analysis: Look for trends in reported food expenses, cooking frequency, and self-efficacy; cross-check with student reflections or usage data from the cooking assistant platform (a trend-analysis sketch also follows this table).
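
For the Level 2 analysis, "use statistics to assess learning gains" could be as simple as a paired comparison of matched pre/post quiz scores. The sketch below assumes a hypothetical export with one pre and one post score per learner and uses a paired t-test plus a rough effect-size estimate; the actual instrument and choice of test remain open.

import pandas as pd
from scipy import stats

# Hypothetical export of matched pre/post meal-planning quiz scores
# (one row per learner); file and column names are assumptions.
scores = pd.read_csv("meal_planning_quiz_scores.csv")  # user_id, pre_score, post_score

gain = scores["post_score"] - scores["pre_score"]

# Paired t-test: did quiz scores improve from pre to post?
t_stat, p_value = stats.ttest_rel(scores["post_score"], scores["pre_score"])

# Cohen's d for paired samples, as a rough effect-size estimate
cohens_d = gain.mean() / gain.std(ddof=1)

print(f"Mean learning gain: {gain.mean():.1f} points (n = {len(scores)})")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Effect size (Cohen's d): {cohens_d:.2f}")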
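
For the Level 4 analysis, the planned look at trends in food spending and cooking frequency could start as a month-over-month summary of the self-report and budget-tracking data. The file name and fields below are assumptions for illustration only.

import pandas as pd

# Hypothetical monthly check-in / budget-tracking export; fields are assumed.
monthly = pd.read_csv("monthly_checkin.csv")  # user_id, month (YYYY-MM), food_spend, meals_cooked

monthly["month"] = pd.PeriodIndex(monthly["month"], freq="M")
trend = monthly.groupby("month")[["food_spend", "meals_cooked"]].mean().sort_index()

# Direction of change from first to last month, to cross-check against reflections
change = trend.iloc[-1] - trend.iloc[0]

print(trend)
print(f"Change, first to last month: food spend {change['food_spend']:+.2f}, "
      f"meals cooked {change['meals_cooked']:+.1f}")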