Software Development Life Cycle (SDLC)
1. Purpose and Objective
The Software Development Life Cycle (SDLC) outlines the typical process used by DevOpsSystems GmbH to plan, develop, test, deliver, and maintain its software products.
The aim is to promote quality, security, traceability, and continuous improvement throughout all development activities related to these products.
Scope
This document applies to all software products developed, maintained, or supported internally by DevOpsSystems GmbH – regardless of whether they are provided as cloud-based (Forge) or on-premises (Data Center) solutions.
2. Core Principles
Our development process is based on agile methods and combines them with a DevOps-oriented approach, aiming to create a close alignment between development, operations, support, and quality assurance.
We place great importance on short communication paths, high transparency, and automated processes to ensure efficient and reliable work – even as a small team of 1 to 5 people.
Agile Approach
Our development approach is generally inspired by agile methodologies and follows an incremental process. Requirements are recorded and prioritized as issues in Jira. Each product increment is intended to conclude with a functional intermediate version that can be tested and reviewed. This approach enables quick adjustments based on customer feedback or new insights and supports continuous improvement.
Regular retrospectives help us to continuously improve both our working methods and the quality of our products.
DevOps Mindset
We strive for a high level of automation across the entire toolchain – from build and test processes to deployment and documentation.
The goal is to ensure stable releases while minimizing the effort required for manual tasks. Development, testing, deployment, and operations (only if DevOpsSystems GmbH is the operator) are viewed as a shared responsibility.
Security and Quality Orientation
Security and quality are considered throughout all phases of the software life cycle whenever possible.
The intention is to evaluate security-related aspects early in the planning stage (“Security by Design”).
Tools for code analysis help ensure code quality, readability, and maintainability.
Tools for identifying security vulnerabilities support the early detection and remediation of potential risks.
Changes are ideally aligned with standardized processes to promote high quality and traceability.
Collaboration and Transparency
Our collaboration is characterized by openness, trust, and mutual support. We aim to document key work results – such as code, documentation, test reports, and decisions – in version control or documentation systems as transparently as possible.
This approach is intended to help ensure that all stakeholders remain informed about the current project status and context, and that changes can be traced in a clear and transparent manner.
Customer Collaboration
We aim to actively involve our customers in the development process. Feedback received through service management, email, or communication in customer projects is incorporated into our planning and helps us to further develop our products in a practical and focused manner.
3. Tools and Systems
Our tool landscape is designed to support efficiency, transparency, and traceability, helping us to cover the entire software development life cycle.
| Category | Tool / System | Purpose |
|---|---|---|
| Project Management | Jira | Managing requirements, tasks, and workflows |
| Documentation | Confluence | Technical documentation, architecture, process descriptions, and product documentation |
| Support & Incident Management | Jira Service Management | Ticketing for support requests, malfunctions, and general communication |
| Version Control | Git | Managing source code, versioning, branching, merging, and ensuring traceability of changes |
| Code Management | Bitbucket | Managing repositories, pull requests, permissions, code reviews, and CI/CD integrations |
| Code Quality & Security | SonarQube, Snyk / OWASP tools, and linters for the relevant technologies | Monitoring code quality, detecting code smells, ensuring compliance with standards, and analyzing dependencies for potential security vulnerabilities |
| Testing | Various test frameworks, depending on the technology used | Performing unit, integration, end-to-end, and UI tests |
| CI/CD | Bitbucket Cloud / Jenkins | Managing build, test, and deployment processes to support continuous delivery |
| Artifact Management | Nexus | Storing, versioning, and managing build artifacts such as libraries and packages |
| Integrated Development Environment | VS Code and IntelliJ IDEA | Local development and debugging environments for various programming languages and frameworks. The goal is to make all quality-related information available directly within the development environment. |
4. Phases of the Software Development Life Cycle
This chapter outlines the phases of the development process — from the initial idea through implementation to maintenance. Each phase contributes to ensuring quality, stability, and security and is defined by clear responsibilities and workflows.
| Phase | Tools Used | Description |
|---|---|---|
| Planning | Jira, Confluence | Capturing, evaluating, and prioritizing requirements (functionality, security, quality, and objectives) |
| Design | Confluence, Markdown, draw.io | Technical and conceptual design of the software, including security and interface design |
| Development | VS Code, IntelliJ IDEA, Git, Bitbucket | Implementation of the defined requirements in source code |
| Quality Assurance | Bitbucket, SonarQube, Snyk, OWASP tools, various linters, test suites | Verifying that the software meets the defined requirements (functional, non-functional, technical, and security-related) |
| Deployment | Bitbucket, Jenkins, Nexus, Atlassian Marketplace | Delivering software to end users or production systems in a controlled manner |
| Maintenance | Jira Service Management and all other tools | Communication, continuous maintenance, bug fixing, further development, and optimization |
4.1 Planning
This phase lays the foundation for each project by collecting, evaluating, and prioritizing requirements.
We maintain a central product backlog that is continuously updated with new ideas, requirements, improvements, and known bugs.
From this product backlog, the goal is to create a release backlog that defines the content of a planned version or release.
A planned release may include new features, optimizations, and non-critical or minor bug fixes. (Hotfixes are handled separately; see the Maintenance section.)
The release backlog results from team-based prioritization according to effort, benefit, and risk.
The release backlog serves as the starting point for the subsequent phases of the development process.
4.2 Design
In this phase, the technical foundation of the software is established. The goal is to create a robust, secure, and scalable architecture that allows for future extensions and enhancements.
Key architectural, interface, and design decisions can be documented in Confluence wherever possible.
Security aspects (e.g., OWASP Top 10) are ideally considered at this early stage of the development process.
4.3 Development
This phase covers the actual implementation of the requirements and forms the core of the entire software development process. The goal is to create stable, maintainable, and secure software that meets the defined requirements and can be operated reliably over the long term. We aim to keep all development activities transparent and traceable, using version control and working closely with established workflows, testing, and quality assurance processes.
We develop using Git and follow the Trunk-Based Development workflow.
All behavior-changing code modifications are carried out according to the principles of Feature-Driven Development (FDD).
For cloud products, separate staging and production branches may be used to enable controlled testing and deployment.
Our goal is for code changes to be submitted through pull requests.
Whenever possible, each pull request should be reviewed by at least one additional team member.
We plan to use linters to assist developers in maintaining consistent code quality and style during implementation.
Ideally, unit tests are written for every code change to verify correctness and prevent regressions.
Teams aim to integrate vulnerability checks for dependencies directly into their development environments to help identify potential security issues early.
SonarQube ideally checks code quality, security aspects, and code smells directly within the development environment during the development phase.
Jira tickets can be linked to commits, branches, and pull requests to ensure complete traceability throughout the process.
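As a hypothetical sketch of this trunk-based flow (the Jira keys PROJ-123 and PROJ-124, branch names, and file names are illustrative, not real project identifiers), a short-lived feature branch is created, committed with the ticket key for traceability, and merged back to the trunk — in practice via a reviewed Bitbucket pull request:

```shell
# Illustrative trunk-based workflow sketch; runs against a throwaway local repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

# Initial state of the trunk, with the Jira key in the commit message
echo "v1" > app.txt
git add app.txt
git commit -q -m "PROJ-123 initial version"

# Short-lived feature branch, named after the ticket
git switch -q -c PROJ-124-add-feature
echo "feature" >> app.txt
git commit -q -am "PROJ-124 add feature"

# In practice this step is a Bitbucket pull request with at least one
# reviewer; locally, the merge back to the trunk looks like this:
git switch -q main
git merge -q --no-ff -m "Merge PR: PROJ-124 add feature" PROJ-124-add-feature
```

Using the ticket key as branch and commit prefix is what lets Jira link commits, branches, and pull requests back to the issue automatically.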
4.4 Quality Assurance
Our quality assurance aims to ensure that each change meets the defined requirements and does not cause unintended side effects. The primary goal is to detect issues as early as possible and maintain a high level of software quality and stability.
Quality assurance usually begins in the early stages of development. The aim of tests and code analyses is to detect potential issues early on and increase confidence in the reliability of each change.
Testing
We distinguish the following types of tests as part of our quality assurance process:
- Unit Tests: Verification of individual functions or modules to ensure correct operation.
- Integration Tests: Ensuring that multiple components interact correctly with one another.
- System Tests: Testing the overall system under realistic conditions (e.g., using test data or simulated interfaces).
- UI Tests: Automated verification of user interface behavior and interaction, ensuring that workflows and visual components function as expected across supported environments.
In addition, we place particular focus on the following aspects:
- Regression: Automated repetition of tests to ensure that new changes do not affect existing functionality.
- Code Coverage: As part of automated testing, we aim to continuously measure code coverage to assess the level of test completeness.
Code Analysis
We distinguish the following types of code analyses as part of our quality assurance process:
- Static Application Security Testing (SAST): Analysis of source code to identify potential security risks.
- Software Composition Analysis (SCA): Review of external dependencies and open-source components for known vulnerabilities.
- Dynamic Application Security Testing (DAST): Analysis of running applications to detect vulnerabilities. (Planned for 2026.)
Automation
Tests and code analyses are largely automated and integrated into the CI/CD pipeline. The goal is for each code commit or pull request to automatically trigger relevant tests. This helps ensure that faulty or insecure changes are not introduced into the main development line. If issues are detected, builds may be stopped and the responsible teams notified. Test results can be documented in the respective systems (e.g., CI systems) and can be considered during the next planning phase if necessary.
We aim to minimize manual testing to ensure consistency, repeatability, and efficiency. Where appropriate, additional manual tests can be planned and documented to complement the automated coverage.
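As an illustrative sketch only (step names, script paths, and tool invocations are assumptions, not our actual pipeline definition), such an automated gate on every pull request could look like this in a `bitbucket-pipelines.yml`:

```yaml
# Illustrative sketch -- the build and scan steps are assumptions,
# not the actual DevOpsSystems GmbH pipeline.
pipelines:
  pull-requests:
    '**':                          # run on every pull request
      - step:
          name: Build and test
          script:
            - ./scripts/build.sh   # hypothetical build script
            - ./scripts/test.sh    # unit and integration tests
      - step:
          name: Code analysis
          script:
            - sonar-scanner        # SAST / code-smell analysis (SonarQube)
            - snyk test            # SCA: dependency vulnerability scan
```

A failing step stops the build, which is how faulty or insecure changes are kept out of the main development line.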
Bug Bounty Programs
To complement internal security measures, DevOpsSystems GmbH may also participate in bug bounty programs when appropriate. The goal is to identify potential vulnerabilities early through independent expertise and to continuously enhance product security. Any identified issues are reviewed, prioritized, and handled through the standard development and maintenance process.
4.5 Deployment
Deployment describes the planned process by which a tested and approved software version can be delivered to a target environment (e.g., staging, production, Nexus, or the Atlassian Marketplace). The goal is to ensure that the software is deployed in a controlled, reproducible, and traceable manner — regardless of whether it is operated as a cloud service or an on-premises solution.
Deployment Strategy
We follow a Continuous Delivery approach. This means that every version that passes the build and all tests is potentially ready for production use.
However, the actual deployment is carried out manually by authorized team members to provide a final level of control and validation.
Deployment Types
Cloud Products:
Deployments are typically carried out through centralized automation within a defined cloud environment (e.g., Atlassian Forge). Separate staging and production environments are used to validate changes in a controlled manner before they go live. Automation scripts are intended to help make deployments consistent and reproducible.
On-Premise Products:
Deployment is handled via an artifact management system or an official marketplace (e.g., Atlassian Marketplace). Installable packages or containers are provided, which can be installed manually by our team or by the customer. This approach allows flexibility across different customer environments while supporting internal quality and security standards.
Traceability
Each deployment is versioned and assigned a specific release number. Version details and changes are documented so that the following can be traced at any time:
- which version was deployed, and
- which changes were included.
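One way to support this with Git alone (a sketch; the version numbers and Jira keys are illustrative) is to mark each deployed release with an annotated tag, so the commit range between two tags answers exactly which changes a release included:

```shell
# Illustrative sketch: annotated tags record which version was deployed;
# the range between tags lists which changes were included.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

echo one > f; git add f
git commit -qm "PROJ-1 first change"
git tag -a v1.0.0 -m "Release 1.0.0"   # first deployed version

echo two >> f
git commit -qam "PROJ-2 second change"
git tag -a v1.1.0 -m "Release 1.1.0"   # next deployed version

# Which changes were included in 1.1.0:
git log --oneline v1.0.0..v1.1.0
```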
Release Documentation
After a successful deployment, a release entry is typically created in the documentation.
This entry usually includes:
- the version number,
- a description of the implemented changes, and
- information about any specific notes or limitations.
4.6 Maintenance
After deployment, the active phase of maintaining and supporting the software begins. The goal is to ensure stable and secure operation, respond efficiently to reported issues, and support continuous improvement through systematic analysis.
Error Handling and Prioritization
All reported or identified issues are, where possible, recorded in Jira as issues of type Bug and assigned a severity level. The severity level depends on the impact on functionality or security.
Prioritization generally follows this principle:
Critical Issues (Hotfix):
Issues that are security-relevant or significantly affect productive operation are addressed immediately and resolved in a dedicated hotfix release. Such a release contains only the necessary corrections to quickly restore functionality or security. The hotfix is typically prioritized for implementation, tested, documented, and released after successful validation.
Non-Critical Issues:
Issues without security- or business-critical impact are prioritized and included in the next planned release. These bugs follow the standard development, testing, and release process and are scheduled in the product backlog of a future release to ensure stable and traceable integration into the product.
Product Care
Regular maintenance activities help ensure that the software remains stable, secure, and up to date over the long term.
Typical activities include:
- updating libraries, frameworks, and dependencies,
- addressing technical debt,
- improving performance and compatibility, and
- adapting to new system environments or customer requirements.
Where appropriate, the results of these activities are documented and considered during future release planning.
Customers and Reporting Channels
Customers are essential to our maintenance activities and form a central part of our quality and continuous improvement process. To report issues, suggest improvements, or share security-related information, the Jira Service Management system is available as the primary communication and tracking channel.
End of Maintenance
It is planned to align our products with the Atlassian Long-Term Support (LTS) versions. For on-premises products, the goal is to maintain compatibility with the two most recent LTS versions of the corresponding Atlassian host product (e.g., Bitbucket). In the cloud, the latest version is typically maintained and continuously updated.