2026-02-09 • Product

Built by People Who Use It

Talk is cheap

It's easy to build a product and claim it works. It's harder to stake your own teaching outcomes on it.

That's exactly what we do. We run Ren Digital Academy - a digital centre that teaches A Level H2 General Paper - entirely on our own platform (yes, it has the same name). Every essay gets graded through Ren. Every piece of feedback our students receive passes through the same system we're selling to schools.

This isn't a demo environment. These are real students, with real exams, and real stakes.

Why we dogfood

The software industry has a term for this: dogfooding - using your own product internally before asking anyone else to. The idea is simple: if you wouldn't use it yourself, why should anyone else?

For us, it's the foundation of how we build.

A testing ground for new ideas

When we have a hypothesis about a new feature - a different way to surface feedback, a new tagging model, a change to how rubrics are handled - we don't need to wait for a partner school's next assessment cycle. We test it on the next batch of essays coming through the academy. We get signal in days, not months.

A proof of concept that can't be faked

If our goal is to build a product that delivers genuinely personalised, high-quality feedback, then we should be able to show results. And because we run the academy, we have direct access to the data: student performance over time, feedback quality, tutor efficiency. We don't need to ask a third party to trust us - we can show the outcomes ourselves.

Fast iteration, honest feedback

Our tutors are also our most demanding users. They use the product daily and tell us exactly where it falls short. There's no lag between a pain point being discovered and the team hearing about it.

What we're seeing

As we scale the digital centre, a few things have become clear:

It saves tutors a significant amount of time. The first-pass grading and feedback generation means tutors spend their time refining and personalising rather than starting from a blank page. The mechanical work shrinks; the high-value work stays.

Feedback quality goes up, not down. This is the counterintuitive part. You'd expect AI-assisted feedback to be "good enough" - a compromise. Instead, because tutors start from a detailed draft, the final feedback is more thorough and more specific than what they'd produce from scratch under the same time constraints.

What we're still figuring out

We're not going to pretend everything is solved. The UX of the product is changing constantly because the best workflow isn't obvious yet. How should a tutor review AI-generated marks? What's the right level of detail in a feedback draft? When should the system flag low confidence versus just handling it?

These are questions we can only answer by using the product ourselves, observing what works, and iterating fast.

Our mission stays the same

We're building Ren to help teachers - without undermining the quality of education students receive. In fact, we want to do the opposite: to make the feedback students get meaningfully better than the status quo.

Running a digital academy on our own product keeps us honest about that mission. If Ren isn't good enough for our students, it's not good enough for yours.


Want to see the product we use ourselves? Get in touch to book a demo.