How we build iconik
This document aims to explain the way we do software development here at iconik. We believe in creating software where the user experience always comes first. This means that we involve user experience experts very early in every major new feature or modification and everyone is encouraged to think about how the thing they are working on fits into the whole.
Since iconik is a SaaS platform available over the public internet, we also have to make sure everything we do is as secure as possible. Because of this, security aspects of software development are taken into account at every stage of our software lifecycle.
Everything starts with an idea. This can be a request from a customer, inspiration from a developer or tester, an insight our CEO has while putting his children to bed, or something we conclude we have to change during a security incident retrospective. After someone has had this bright idea, we have to do something with it. The first thing that happens is that it gets written down as a ticket in our boring ticketing system (but it's how we keep track of things).
Our product manager then decides where in the product roadmap this new idea fits and schedules it for a rough release window based on business, technical, operational and other requirements. This scheduling is always subject to change as the product backlog evolves.
Next, it's time to start figuring out what we actually want to do. We gather thoughts and requirements from customers, partners and other stakeholders. These are high-level requirements like "The user should be able to import both original and proxy files into Adobe Premiere" rather than "We should have a download button on the left side of the assets page". We keep requirements at this level to give our designers as much freedom as possible to create solutions that work together as a whole. If we were to lock onto small details at an early stage, we would risk missing the big picture.
The requirements gathered here include both user-related requirements such as "Users should be able to search for assets in the system" as well as technical requirements like "All communication should be encrypted in transit and at rest."
Once we have formulated the requirements for a particular feature it's time to hand it over to the design and UX team. Our User Experience specialists are responsible for making sure everything fits together. At this stage we create mockups to be able to see what the user interface will look like and we try to envision how the new feature will fit into workflows our users create.
Based on the design we also determine whether any new API endpoints or other backend functionality is needed to support the new feature. If so, we define the endpoints, their input and output parameters, and the logic that should run when each endpoint is called.
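As a rough illustration of what "defining an endpoint" means in practice, the sketch below records a method, path, parameters and response fields as plain data. The spec class, field names and example path are all invented for illustration; they are not iconik's actual API definitions.

```python
from dataclasses import dataclass

# Hypothetical shape of an endpoint specification produced during design.
@dataclass
class EndpointSpec:
    method: str          # HTTP method, e.g. "GET"
    path: str            # URL path template
    input_params: dict   # parameter name -> description
    output_fields: dict  # response field -> description
    description: str = ""

# Illustrative example: fetching a single asset by id.
get_asset = EndpointSpec(
    method="GET",
    path="/assets/{asset_id}",
    input_params={"asset_id": "id of the asset to fetch"},
    output_fields={"id": "asset id", "title": "display title"},
    description="Fetch a single asset by its id.",
)

print(get_asset.method, get_asset.path)
```

Keeping the specification as structured data like this makes it easy to review in the design phase, before any implementation work starts.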
The last thing we do as part of the design phase is a security analysis of the proposed new feature. This includes analyzing potential attack vectors as well as highlighting areas of potential problems during implementation. This way we can spot where extra care has to be taken so that we don't open up security holes. This covers things like how we handle authentication tokens, what kind of sensitive or personal data we write to log files, how we execute external commands, and other potentially vulnerable areas.
Once the design is done, it is time to start the actual implementation. We work in an agile, iterative fashion with two-week sprints, where the main areas we work on are determined every other Monday morning. Once sprint planning is done, developers get to work on tickets in priority order. During a sprint, developers and the product manager collaborate closely: as development proceeds, developers can ask the product manager questions, and the product manager can give feedback and steer the direction of the work. This leads to faster turnaround and fewer tickets that have to be reopened after the functional review.
This allows the team to focus efficiently on the tasks they are supposed to accomplish during the two-week period, while the product owner and management still have the ability to steer the team by changing direction between sprints if necessary.
Developers are responsible for writing tests that cover the new functionality they are developing. These tests become part of our continuous test suite and are run after any change to the code. Automated tests are the main way we make sure older features continue to function when we make changes to the software.
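A developer-written test of this kind is typically small and focused on one behavior. The sketch below shows the general pattern with a made-up helper; both the function and the test name are hypothetical, not code from iconik.

```python
# Hypothetical helper a developer might ship alongside a feature:
# derive a human-readable asset title from an uploaded file name.
def make_asset_title(filename: str) -> str:
    stem = filename.rsplit(".", 1)[0]      # drop the file extension
    return stem.replace("_", " ").strip()  # underscores become spaces

# The accompanying unit test, written by the same developer.
def test_make_asset_title_strips_extension_and_underscores():
    assert make_asset_title("summer_trip_2024.mp4") == "summer trip 2024"

test_make_asset_title_strips_extension_and_underscores()
```

Once merged, a test like this runs on every subsequent change, which is what makes it useful as a regression guard rather than a one-off check.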
When a developer is finished implementing, they send the code off for code review. This is where another developer who is familiar with that part of the code manually looks through the proposed changes to verify that they are correct, secure and follow the company coding style. Before any developer even looks at the code, we run the test suite and perform a security scan of the code to highlight potentially problematic areas. If any tests fail, or if the scan detects significant security risks, the change is returned to the developer to fix; otherwise it is passed to a reviewer with notes from the automated scan attached to aid the manual review.
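The gating logic described above can be summarized as a small decision function. This is a toy sketch of the idea, not our actual CI code; the severity levels and return values are invented for illustration.

```python
# Toy model of the automated pre-review gate: failing tests or any
# high-severity scan finding sends the change back to the developer;
# otherwise it proceeds to a human reviewer with the scan notes attached.
def pre_review_gate(tests_passed: bool, scan_findings: list) -> str:
    blocking = [f for f in scan_findings if f["severity"] == "high"]
    if not tests_passed or blocking:
        return "return-to-developer"
    return "manual-review"

# A low-severity finding does not block review; it is attached as a note.
print(pre_review_gate(True, [{"severity": "low", "rule": "possible-log-injection"}]))
```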
Code review serves a number of different purposes:
- Developers tend to write better code when they know someone else is going to look at the code.
- A second set of eyes can spot problems which the original developer may have become blind to.
- It helps spread knowledge in the team about different areas of the code.
Once the code review is done the new feature is allowed to be merged into the branch for the upcoming release.
When development is finished and the feature is ready for release, it is handed over to the Quality Assurance team to verify that it is correctly implemented and to try to find any remaining bugs. Having someone other than the developer test the new functionality is important, since the tester will make different assumptions about it than the developer did. The QA team also has tools for writing automated tests of the entire system. This is where we implement functional tests that cover entire workflows, rather than the unit tests developers normally write.
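To show how a workflow-level functional test differs from a unit test, the sketch below exercises an upload-then-search flow end to end. The client here is a fake standing in for a real API client, and every name in it is hypothetical.

```python
# FakeClient stands in for a real system-level API client so the test
# pattern can be shown without network access or real services.
class FakeClient:
    def __init__(self):
        self._assets = {}  # asset id -> title

    def upload(self, asset_id: str, title: str) -> dict:
        self._assets[asset_id] = title
        return {"id": asset_id, "status": "ok"}

    def search(self, query: str) -> list:
        return [aid for aid, title in self._assets.items() if query in title]

# A functional test crosses feature boundaries: it asserts that an
# uploaded asset becomes findable, not that any single function works.
def test_uploaded_asset_is_searchable():
    client = FakeClient()
    client.upload("a1", "Summer trip")
    assert client.search("Summer") == ["a1"]

test_uploaded_asset_is_searchable()
```

The value of this style of test is that it fails when any step in the chain breaks, even if every individual unit still passes its own tests.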
Before any code is released to production, it is deployed to our staging system, which is a replica of the production environment. When a release candidate has been deployed to staging, we perform another round of manual testing using a pre-defined set of testing instructions. This testing is done before each release of the software. We also perform security scanning of the upcoming release to make sure it doesn't have any vulnerabilities that can be automatically detected. Any discovered vulnerabilities are triaged and fed back into the development lifecycle; depending on their severity, they may be release blockers.
Once a release passes quality assurance it can be released to production. iconik releases can usually be rolled out to production without downtime.
After a new feature has been released we still have to continue to maintain it. This means that we have to make sure it continues to function as we make other changes to the service. The main way we do that is through automated testing. By writing tests when we develop new functionality, we can be reasonably sure that we do not break existing functionality when we introduce a change. This also includes tests of our APIs, since we have a strong commitment to never change our APIs in a backwards-incompatible way.
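One simple way to enforce that commitment is a test asserting that fields promised by an earlier API version are still present in the current response. This is a hedged sketch; the field set and the stubbed response are illustrative, not iconik's real API contract.

```python
# Fields an (invented) v1 API contract promised to return for an asset.
V1_ASSET_FIELDS = {"id", "title", "created_at"}

# Stand-in for calling the real endpoint; adding new fields is always
# allowed, only removing or renaming promised ones would break clients.
def current_asset_response() -> dict:
    return {
        "id": "a1",
        "title": "Summer trip",
        "created_at": "2024-06-01",
        "labels": [],  # a newer, additive field
    }

def test_v1_fields_still_present():
    missing = V1_ASSET_FIELDS - current_asset_response().keys()
    assert not missing, f"backwards-incompatible change, missing: {missing}"

test_v1_fields_still_present()
```

Because this test lives in the continuous suite, an accidental removal of a promised field fails the build long before it could reach a release.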