This isn't specific to startups but it still applies. I was recently asked for advice on how to go from two week sprints to one. The conversation was one I've had several times.
Client: "We are a scrum shop that has two week sprints. We'd like to release faster. Any suggestions?"
Me: "Do you have a QA handoff during the sprint?"
Client: "Sure. We basically do waterfall during the sprint."
Me: "I've got it!"
Me: "Fire your testers."
I'm only half joking.
I used to think a QA person was the essential fourth technical hire, with more added as the organization grew. For close to ten years that's how I managed teams, ensuring each one had access to at least one tester. That changed last year. We were pushing for faster and faster releases with a client, and something didn't feel right. As it happened, we were also having trouble keeping our QA role filled, and the gap was wreaking havoc with our release schedule. It needed to stop.
We held a series of meetings to discuss our needs and what could be done to address them. We all agreed we needed tests. We all agreed we needed someone to own testing. We also agreed that devs should own unit tests, but whether, and how much, integration testing they should do was a matter of intense debate. Should we have a QA person who only does spot testing? Should a human ever repeat a test by hand? Should we forgo human testing and instead have a QA engineer who is chiefly a programmer? If so, how could we cleanly divide their work from the other engineers'?
It was a lot to process. During this time we plowed through countless testing resources: books, blogs, and tweets. We hit the jackpot when we ran into How Google Tests Software, a great book on how testing evolved during the early days at the Goog, and it gave us the answers we were looking for. The sky opened. We had been looking at QA all wrong.
We were skeptical at first. I mean, when you're used to seeing a net below you when you cross the high wire, it's a little unnerving when it's gone, right? Once we realized that having devs own the whole process meant the wire was actually a bridge, there was no fear. The need for a safety net was an illusion perpetuated by our own bad behavior.
I'm paraphrasing, but the problem is essentially in thinking that any part of QA is somebody else's job. We weren't so far gone as to think that engineers didn't own any of it, but we certainly weren't owning enough. Engineers write a few unit tests and figure that's it. Managers jam a QA person between the engineers and each release and call their job done. The reality is that if you want to avoid waterfalls entirely you've got to bake your testing completely into your code effort - not some of the tests, but all of them. The code isn't done until the testing is.
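What that looks like in practice: the feature and its tests land in the same change, owned by the same developer, with no handoff afterward. Here's a minimal sketch of the idea (the function names and numbers are made up for illustration, and the tests are written in pytest style, where plain asserts are enough):

```python
# Hypothetical example: the developer ships the feature and its tests in
# one change -- a unit test for the pure logic plus an integration-style
# test for the composed behavior. No separate QA handoff follows.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; the pure logic, unit tested."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def checkout(cart: list, discount_percent: float) -> float:
    """Total a cart with a discount applied; the composed behavior."""
    return apply_discount(sum(cart), discount_percent)

# Unit test: the smallest piece of logic, in isolation.
def test_apply_discount_unit():
    assert apply_discount(100.0, 25.0) == 75.0

# Integration-style test: the pieces working together, as a user sees them.
def test_checkout_integration():
    assert checkout([10.0, 20.0, 30.0], 10.0) == 54.0
```

The point isn't the trivial math; it's that both levels of testing live next to the code, run in the same pipeline, and block the merge if they fail - the developer, not a downstream tester, owns the whole thing.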
How do you know when you've done it right? You won't need any testers. Having a tester from the get-go creates an artificial dependence on someone else to do your testing for you. It also creates an unnecessary step in your release process. Be your own tester first. Separate QA roles should only exist once your QA needs involve a strategic planning component that can no longer be distributed throughout the development team. It depends somewhat on your dev team and your product, but for most places this isn't until the third or fourth year.
Do the work yourself. Design a workflow that requires developers to wipe their own behinds by writing automated tests for, and testing, their own code. Your devs will make smarter decisions. You can stop paying for people you don't need. You can finally get the waterfall out of your scrum. I'd go so far as to say Continuous Delivery can't be achieved without this approach. You can do without dedicated QA. Start now. Your code, your process, your developers, your timeline, and your budget will thank you for it.