The clinical-data-common repository is a centralised home for common code shared across clinical data API products, such as clinical-data-gateway-api. It houses various Python sub-modules and Infrastructure as Code (IaC) configurations.
Clone the repository:
```
git clone https://github.com/NHSDigital/clinical-data-common.git
cd clinical-data-common
```

The following software packages, or their equivalents, are expected to be installed and configured:
Note
The version of GNU make available by default on macOS is earlier than 3.82, so you will need to upgrade it or certain make tasks will fail. On macOS you will need Homebrew installed; then install make like so:
```
brew install make
```

You will then see instructions to fix your $PATH variable to make the newly installed version available. If you are using dotfiles, this is all done for you.
Install dependencies:
```
make dependencies
```

This package provides common utilities and functions for clinical data API products. It is designed to be imported as a dependency into other projects like clinical-data-gateway-api.
To use clinical-data-common in your project, you can install it using Poetry:
Add the following to your pyproject.toml:
```toml
[tool.poetry.dependencies]
clinical-data-common = { git = "https://github.com/NHSDigital/clinical-data-common.git", branch = "main" }
```

Or install via command line:
```
poetry add git+https://github.com/NHSDigital/clinical-data-common.git
```

You can also install a specific tag/version:
```
poetry add git+https://github.com/NHSDigital/clinical-data-common.git@<tag>
```

Once installed, you can import and use the functions from clinical-data-common as follows:
```python
from clinical_data_common import get_hello

# Use the greeting function
greeting = get_hello()
message = f"{greeting}World"
print(message)  # Output: Hello, World
```

There are make tasks for you to configure to run your tests. Run make test to see how they work. You should be able to use the same entry points for local development as in your CI pipeline.
```
make test-unit
```

Or directly with Poetry:
```
poetry run pytest src/
```
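Since pytest is pointed at src/, the tests sit alongside the code. As a minimal, hedged sketch of what such a test might look like, assuming only the get_hello helper shown in the usage example above:

```python
# test_hello.py - an illustrative pytest sketch, not an existing test in this repo.
# It assumes only the get_hello helper shown in the usage example above.
from clinical_data_common import get_hello


def test_get_hello_returns_greeting():
    greeting = get_hello()
    # The usage example implies that appending "World" yields "Hello, World".
    assert f"{greeting}World" == "Hello, World"
```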
To run tests with code coverage reporting:

```
poetry run pytest --cov=clinical_data_common --cov-report=term --cov-report=html src/
```

This will generate:
- A terminal coverage report showing coverage percentages
- An HTML coverage report in the `htmlcov/` directory
To view the HTML coverage report:
```
open htmlcov/index.html
```

This project uses SonarCloud for continuous code quality and security analysis. The analysis runs automatically as part of the CI/CD pipeline on every pull request.
SonarCloud analyzes:
- Code quality and maintainability
- Security vulnerabilities
- Code coverage from unit tests
- Code smells and technical debt
The Quality Gate status is displayed at the top of this README. You can view detailed analysis reports on the SonarCloud dashboard.
The test pipeline (.github/workflows/stage-2-test.yaml) automatically:
- Runs unit tests with coverage
- Generates coverage reports in XML format
- Uploads coverage data to SonarCloud
- Performs static code analysis
Coverage reports are sent to SonarCloud and must meet the quality gate thresholds for pull requests to be merged.
The C4 model is a simple and intuitive way to create software architecture diagrams that are clear, consistent, scalable and, most importantly, collaborative. Using it should result in documentation of all the system interfaces, external dependencies and integration points.
The source for diagrams should be kept in Git for change control and review purposes. Recommended tools are draw.io (see the example above, in the docs folder) and Mermaid. Here is an example Mermaid sequence diagram:
```mermaid
sequenceDiagram
    User->>+Service: GET /users?params=...
    Service->>Service: auth request
    Service->>Database: get all users
    Database-->>Service: list of users
    Service->>Service: filter users
    Service-->>-User: list[User]
```
Most of the projects are built with customisability and extensibility in mind. At a minimum, this can be achieved by implementing service-level configuration options and settings, as sketched below. The intention of this section is to show how these can be used. If the system processes data, you could also mention here how the input is prepared for testing, for example anonymised, synthetic or live data.
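As an illustration only, service-level settings in a Python sub-module could be modelled along these lines; the setting names and defaults below are hypothetical and not part of clinical-data-common:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceSettings:
    """Hypothetical service-level settings; names and defaults are illustrative."""

    environment: str = "dev"
    log_level: str = "INFO"
    use_synthetic_data: bool = True  # e.g. prefer anonymised/synthetic input in tests

    @classmethod
    def from_env(cls) -> "ServiceSettings":
        # Read overrides from environment variables, falling back to the defaults above.
        return cls(
            environment=os.environ.get("SERVICE_ENVIRONMENT", cls.environment),
            log_level=os.environ.get("SERVICE_LOG_LEVEL", cls.log_level),
            use_synthetic_data=os.environ.get("SERVICE_USE_SYNTHETIC_DATA", "true").lower() == "true",
        )


settings = ServiceSettings.from_env()
```

Consumers can then read such settings rather than hard-coding behaviour, which keeps the shared sub-modules customisable per service.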
Describe or link templates on how to raise an issue, request a feature or make a contribution to the codebase. Reference the other documentation files, such as:
- Environment setup for contribution, i.e. CONTRIBUTING.md
- Coding standards, branching, linting, practices for development and testing
- Release process, versioning, changelog
- Backlog, board, roadmap, ways of working
- High-level requirements, guiding principles, decision records, etc.
Provide a way to contact the owners of this project. It can be a team, an individual or information on the means of getting in touch via active communication channels, e.g. opening a GitHub discussion, raising an issue, etc.
The LICENCE.md file will need to be updated with the correct year and owner.
Unless stated otherwise, the codebase is released under the MIT License. This covers both the codebase and any sample code in the documentation.
Any HTML or Markdown documentation is © Crown Copyright and available under the terms of the Open Government Licence v3.0.
