From 5ca7a55471e74dc0d6ed7c4a028bb93d50a83158 Mon Sep 17 00:00:00 2001
From: <>
Date: Thu, 8 Aug 2024 13:02:35 +0000
Subject: [PATCH] Deployed 2942e85 with MkDocs version: 1.6.0
---
# ADP Portal

Welcome to the Azure Developer Portal (ADP) repository. The portal is built using Backstage.
Prerequisites:

- `make` and `build-essential` packages installed
- `curl` or `wget` installed

See the Backstage Getting Started documentation for the full list of prerequisites.
The portal is integrated with various third-party services. Connections to these services are managed through the environment variables described below.
Development can be done within a devcontainer if you wish. Once the devcontainer is set up, fill out the `.env` file at the root and rebuild the container. Once rebuilt, you will need to log in to the Azure CLI for the tenant you wish to connect to using `az login --tenant <TenantId>`.
If you are using VS Code, the steps are as follows:

1. Run the `Dev Containers: Clone Repository in Container Volume` command and clone `https://github.com/DEFRA/adp-portal.git`.
2. Create a `.env` file at the root, and fill it out with the variables described below.
3. Run the `Dev Containers: Rebuild Container` command.
4. Log in to the Azure CLI with `az login --tenant <YOUR_TENANT_ID>`.
5. Start the portal with `yarn dev`.
To sign commits using GPG from within the devcontainer, please follow the steps here
The application requires the following environment variables to be set. We recommend creating a `.env` file in the root of your repo (this is ignored by Git) and pasting the variables into this file. This file will be used whenever you run a script through `yarn`, such as `yarn dev`.
To convert a GitHub private key into a format that can be used in the `GITHUB_PRIVATE_KEY` environment variable, use one of the following scripts:

- PowerShell
- Shell
A hybrid strategy is implemented for techdocs, which means documentation can be generated on the fly by the out-of-the-box generator or by an external pipeline. All generated documentation is stored in Azure Blob Storage.
For more info, please refer to: Ref
Run the project from the root of the repository; the original command snippet is omitted here, but for a Backstage app this is typically `yarn install` followed by `yarn dev`.
If you want to override any settings in `./app-config.yaml`, create a local configuration file named `app-config.local.yaml` and define your overrides there, as sketched below.
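A minimal sketch of such an override file; the keys shown are standard Backstage settings and are only illustrative of what you might override:

```yaml
# app-config.local.yaml -- local overrides, not committed to Git.
app:
  baseUrl: http://localhost:3000   # serve the frontend locally
backend:
  baseUrl: http://localhost:7007   # point the app at a local backend
```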
You need to have the Azure CLI (`az`) and the Azure Developer CLI (`azd`) installed. Log in to both `az` and `azd` before running the server.
You must run the application in the same browser session that the authentication ran in. If you use a private window or a new session, it will not have access to the cookies required to complete authentication, and you will get a 'user not found' error message.
If you have an idea for a new feature or an improvement to an existing feature, please follow these steps:
If you're ready to submit your code changes, please follow the steps specified in the `pull_request_template`.
To maintain a consistent code style throughout the project, please adhere to the following guidelines:
Include information about the project's license and any relevant copyright notices.
# ASO Helm Library Chart

A Helm chart library that captures general configuration for Azure Service Operator (ASO) resources. It can be used by any microservice Helm chart to import AzureServiceOperator K8s object templates configured to run on the ADP platform.

In your microservice Helm chart:
* Update `Chart.yaml` to `apiVersion: v2`.
* Add the library chart under `dependencies` and choose the version you want (example below). The version number can include `~` or `^` to pick up the latest PATCH and MINOR versions respectively.
* Issue the following commands to add the repo that contains the library chart (`helm repo add`), update the repo (`helm repo update`), then update dependencies in your Helm chart (`helm dependency update`).

An example Demo microservice `Chart.yaml`:
NOTE: We will use ACR, where the ASO Helm Library Chart can be published, so the dependencies above will be changed to import the library from ACR (in progress).
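A minimal sketch of the `Chart.yaml` example referred to above. The repository URL and exact version are assumptions, not the platform's published values:

```yaml
apiVersion: v2
name: adp-demo
description: Demo microservice Helm chart
version: 1.0.0
dependencies:
  - name: adp-aso-helm-library
    version: ^1.0.0                # picks up latest MINOR/PATCH releases
    repository: https://example.invalid/adp-helm-repository   # assumed repo URL
```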
First, follow the instructions for including the ASO Helm library chart.

The ASO Helm library chart has been configured using the conventions described in the Helm library chart documentation. The K8s object templates provide settings shared by all objects of that type, which can be augmented with extra settings from the parent (Demo microservice) chart. The library object templates will merge the library and parent templates; where settings are defined in both the library and the parent chart, the parent chart settings take precedence, so library chart settings can be overridden. The library object templates expect values to be set in the parent chart's `values.yaml`. Any required values (defined for each template below) that are not provided will result in an error message when processing the template (`helm install`, `helm upgrade`, `helm template`).
The general strategy for using one of the library templates in the parent microservice Helm chart is to create a template for the K8s object formatted as follows:
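A minimal sketch of that wrapper template, assuming the library exposes its templates as named includes (the include names follow the template names listed below):

```yaml
# templates/namespace-queue.yaml in the parent chart:
# delegate rendering to the library template, passing the current scope.
{{- include "adp-aso-helm-library.namespace-queue" . -}}
```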
All the K8s object templates in the library require the following values to be set in the parent microservice Helm chart's `values.yaml`:
The values below are used by the ASO templates internally; their values are set using `platform variables` in the `adp-flux-services` repository. For example, Namespace Queues will get created inside the `serviceBusNamespaceName` namespace, and Postgres databases will get created inside the `postgresServerName` server.
Whilst the Platform orchestration will manage the 'platform' level variables, they can optionally be supplied in some circumstances, for example in sandpit/development when testing against team-specific infrastructure (that isn't Platform shared). So, if you have a dedicated Service Bus or Database Server instance, you can point to those to ensure your apps work as expected. Otherwise, don't supply the Platform-level variables, as these will be automatically managed and orchestrated across all the environments against the core shared infrastructure. You (as a Platform Tenant) just supply your team-specific/instance-specific infrastructure config (i.e. Queues, Storage Accounts, Databases).
## `_namespace-queue.yaml`

Template: `adp-aso-helm-library.namespace-queue`

An ASO `NamespacesQueue` object to create a Microsoft.ServiceBus/namespaces/queues resource.
A basic usage of this object template would involve the creation of `templates/namespace-queue.yaml` in the parent Helm chart (e.g. `adp-microservice`) containing an include of the library template, as in the wrapper sketch above.
The following values need to be set in the parent chart's `values.yaml` in addition to the globally required values listed above. Note that `namespaceQueues` is an array of objects that can be used to create more than one queue.

Please note that the queue name is prefixed with the `namespace` internally. For example, if the namespace name is "adp-demo" and you have provided the queue name as "queue1", then in the service bus it creates a queue with the `adp-demo-queue1` name.
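A minimal `values.yaml` sketch for this template; the original snippet is omitted, so the exact field names are assumptions based on the prose:

```yaml
namespaceQueues:
  - name: queue1   # created as <namespace>-queue1 in the Service Bus namespace
  - name: queue2
```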
The following values can optionally be set in the parent chart's `values.yaml` to set the other properties for Service Bus queues.

The `owner` property is used to control the ownership of the queue. The default value is `yes`, and you don't need to provide it if you are creating and owning the queue. If you are creating only role assignments for a queue you do not own, then you should explicitly set the `owner` flag to `no` so that it will only create the role assignments on the existing queue.
This template also optionally allows you to create `Role Assignments` by providing `roleAssignments` properties in the `namespaceQueues` object. Below are the minimum values that are required to be set in the parent chart's `values.yaml` to create a `roleAssignments` entry. As above, if you are creating only role assignments for a queue you do not own, explicitly set the `owner` flag to `no` so that only the role assignments are created on the existing queue.
The following section provides usage examples for the Namespace Queues template.
#### Example: create role assignments for an existing `claim` queue. Note that `owner` is set to `no`.
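A sketch of what that stripped example might look like, with an illustrative role name:

```yaml
namespaceQueues:
  - name: claim
    owner: no                    # queue already exists; don't create it
    roleAssignments:
      - roleName: QueueReceiver  # illustrative role name
```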
## `_namespace-topic.yaml`

Template: `adp-aso-helm-library.namespace-topic`

An ASO `NamespacesTopic` object to create a Microsoft.ServiceBus/namespaces/topics resource.
A basic usage of this object template would involve the creation of `templates/namespace-topic.yaml` in the parent Helm chart (e.g. `adp-microservice`) containing an include of the library template.
The following values need to be set in the parent chart's `values.yaml` in addition to the globally required values listed above. Note that `namespaceTopics` is an array of objects that can be used to create more than one topic.

Please note that the topic name is prefixed with the `namespace` internally. For example, if the namespace name is "adp-demo" and you have provided the topic name as "topic1", then in the service bus it creates a topic with the "adp-demo-topic1" name.
+For example, if the namespace name is "adp-demo" and you have provided the topic name as "topic1," then in the service bus, it creates a topic with the "adp-demo-topic1" name.
The following values can optionally be set in the parent chart's values.yaml
to set the other properties for namespaceTopics
:
owner
property is used to control the ownership of the topic. The default value is yes
and you don't need to provide it if you are creating and owning the topic.
+If you are only creating role assignments for the topic you do not own, then you should explicitly set the owner
flag to no
so that it will only create the role assignments on the existing topic.
This template also optionally allows you to create Role Assignments
by providing roleAssignments
properties in the namespaceTopics object.
Below are the minimum values that are required to be set in the parent chart's values.yaml to create a roleAssignments
.
If you are creating only role assignments for the Topic you do not own, then you should explicitly set the owner
flag to no
so that it will only create the role assignments on the existing Topic (See Example 2 in Usage examples section).
This template also optionally allows you to create Topic Subscriptions
and Topic Subscriptions Rules
for a given topic by providing Subscriptions and SubscriptionRules properties in the topic object.
Below are the minimum values that are required to be set in the parent chart's values.yaml
to create a NamespacesTopic
, NamespacesTopicsSubscription
and NamespacesTopicsSubscriptionsRule
To create topicSubscriptions
inside already existing topics, set the property owner
to no
. By default owner
is set to yes
which creates the topic name defined in values (See Example 4 in Usage examples section).
topicSubscriptions
¶The following values can optionally be set in the parent chart's values.yaml
to set the other properties for topicSubscriptions
:
topicSubscriptionRules
¶The following values can optionally be set in the parent chart's values.yaml
to set the other properties for topicSubscriptionRules
:
The following section provides usage examples for the Namespace Topic template.
+claim-notify
Topic. Note that owner
is set to no
.¶_flexible-servers-db.yaml
adp-aso-helm-library.flexible-servers-db
An ASO FlexibleServersDatabase
object.
A basic usage of this object template would involve the creation of templates/flexible-servers-db.yaml
in the parent Helm chart (e.g. adp-microservice
) containing:
The following values need to be set in the parent chart's values.yaml
in addition to the globally required values listed above:
+
namespace
internally. For example, if the namespace name is "adp-microservice" and you have provided the DB name as "demo-db," then in the postgres server, it creates a database with the name "adp-microservice-demo-db".
+The following section provides usage examples for the Flexible-Servers-Db template.
+payment
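A sketch of what that stripped example might look like; the top-level key name is an assumption, as the original snippet is omitted:

```yaml
flexibleServersDatabases:      # assumed key name
  - name: payment              # created as <namespace>-payment on the server
```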
## `_userassignedidentity.yaml`
Template: `adp-aso-helm-library.userassignedidentity`

An ASO `UserAssignedIdentity` object to create a Microsoft.ManagedIdentity/userAssignedIdentities resource.

A basic usage of this object template would involve the creation of `templates/userassignedidentity.yaml` in the parent Helm chart (e.g. `adp-microservice`) containing an include of the library template.
This template uses the values below; they are set using platform variables in the `adp-flux-services` repository as part of the service's ASO helmrelease value configuration, and you don't need to set them explicitly in the `values.yaml` file.

The UserAssignedIdentity name is derived internally and is set to `{TEAM_MI_PREFIX}-{SERVICE_NAME}`. For example, in SND1, if the `TEAM_MI_PREFIX` value is set to "sndadpinfmid1401" and the `SERVICE_NAME` value is set to "adp-demo-service", then the `UserAssignedIdentity` value will be "sndadpinfmid1401-adp-demo-service".

The following values can optionally be set in the parent chart's `values.yaml` to set the other properties for the user assigned identity:
This template also optionally allows you to create `Federated credentials` for a given User Assigned Identity by providing `federatedCreds` properties in the `userAssignedIdentity` object. Below are the minimum values that are required to be set in the parent chart's `values.yaml` to create a `userAssignedIdentity` and `federatedCreds`.

The following section provides usage examples for the UserAssignedIdentity template.
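A sketch of the minimum `federatedCreds` values; all field names and the GitHub issuer/subject shown are illustrative assumptions:

```yaml
userAssignedIdentity:
  federatedCreds:
    - name: github-actions                                   # hypothetical credential name
      issuer: https://token.actions.githubusercontent.com
      subject: repo:DEFRA/adp-demo-service:ref:refs/heads/main
```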
## `_storage-account.yaml`

Template: `adp-aso-helm-library.storage-account.yaml`

> Version 2.0.0 and above
>
> Starting from version 2.0.1, the Storage Account template has been enhanced with role assignments. These data role assignments are scoped at the storage account level, introducing two new data roles: DataWriter and DataReader.
>
> The DataWriter role grants applications the ability to both read and write data in the blob containers, tables, and files. Conversely, the DataReader role provides applications with read-only access to data in the blob containers, tables, and files.

An ASO `StorageAccount` object to create a Microsoft.Storage/storageAccounts resource and, optionally, the sub-resources Blob Containers and Tables.
By default, private endpoints are always enabled on storage accounts and `publicNetworkAccess` is disabled. Optionally, you can also configure `ipRules` in scenarios where you want to limit access to your storage account to requests originating from specified IP addresses.

Please be aware that this template only adds A records to the central DNS zone for the Dev, Tst, Pre, and Prd environments. For the Sandpit environments snd1, snd2, and snd3, it currently only generates a private endpoint without adding an A record to the DNS zone; you will need to add this entry separately via a PowerShell script.

With this template, you can create the resources below:

- Storage Accounts
- Blob containers and RoleAssignments
- Tables and RoleAssignments
A basic usage of this object template would involve the creation of `templates/storage-account.yaml` in the parent Helm chart (e.g. `adp-microservice`) containing an include of the library template.
Below are the default values used by the storage account template internally; they cannot be overridden by the user from the `values.yaml` file.

The following values need to be set in the parent chart's `values.yaml` in addition to the globally required values listed above. Note that `storageAccounts` is an array of objects that can be used to create more than one storage account.

Please note that the storage account name must be unique across Azure. The storage account name is internally prefixed with the `storageAccountPrefix`. For instance, in the Dev environment the `storageAccountPrefix` is configured as `devadpinfst2401`; if you input "claim" as the storage account name, the final storage account name will be `devadpinfst2401claim`.
The following values can optionally be set in the parent chart's `values.yaml` to set the other properties for `storageAccounts`. For a detailed description of each property, see here.

The `owner` property is used to control the ownership of the storage account. The default value is `yes`, and you don't need to provide it if you are creating and owning the storage account. If you are creating Blob containers or Tables on an existing storage account that you do not own, then you should explicitly set the `owner` flag to `no` so that it will only create Blob containers or Tables on the existing storage account.
The following section provides usage examples for the storage account template.
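A minimal `values.yaml` sketch for this template; the nested key names for containers and tables are illustrative assumptions:

```yaml
storageAccounts:
  - name: claim                # final name: <storageAccountPrefix>claim
    blobContainers:            # assumed key name
      - name: container-01
    tables:                    # assumed key name
      - name: table01
```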
The table below shows the Azure Service Operator (ASO) resource naming convention in Azure and Kubernetes.

In the example below, the following platform values are used for demonstration purposes:

- namespace = 'ffc-demo'
- serviceName = 'ffc-demo-web'
- teamMIPrefix = 'sndadpinfmi1401'
- storageAccountPrefix = 'sndadpinfst1401'
- privateEndpointPrefix = 'sndadpinfpe1401'
- postgresServerName = 'sndadpdbsps1401'
- userassignedidentityName = 'sndadpinfmi1401-ffc-demo-web'

And the following user input values are used for demonstration purposes:
| Resource Type | Resource Name Format in Azure | Resource Name Example in Azure | Resource Name Format in Kubernetes | Resource Name Example in Kubernetes |
|---|---|---|---|---|
| NamespacesQueue | {namespace}-{QueueName} | ffc-demo-queue01 | {namespace}-{QueueName} | ffc-demo-queue01 |
| Queue RoleAssignment | NA | NA | {userassignedidentityName}-{QueueName}-{RoleName}-rbac-{index} | sndadpinfmi1401-ffc-demo-web-ffc-demo-queue01-queuereceiver-rbac-0 |
| NamespacesTopic | {namespace}-{TopicName} | ffc-demo-topic01 | {namespace}-{TopicName} | ffc-demo-topic01 |
| NamespacesTopicsSubscription | {namespace}-{TopicSubName} | ffc-demo-topicSub01 | {namespace}-{TopicName}-{TopicSubName}-subscription | ffc-demo-topic01-topicsub01-subscription |
| Topic RoleAssignment | NA | NA | {userassignedidentityName}-{TopicName}-{RoleName}-rbac-{index} | sndadpinfmi1401-ffc-demo-web-ffc-demo-topic01-topicreceiver-rbac-0 |
| Postgres Database | {namespace}-{DatabaseName} | ffc-demo-claim | {postgresServerName}-{namespace}-{DatabaseName} | sndadpdbsps1401-ffc-demo-claim |
| Managed Identity | {teamMIPrefix}-{serviceName} | sndadpinfmi1401-ffc-demo-web | {teamMIPrefix}-{serviceName} | sndadpinfmi1401-ffc-demo-web |
| StorageAccount | {storageAccountPrefix}{StorageAccountName} | sndadpinfst1401demo | {serviceName}-{StorageAccountName} | ffc-demo-web-sndadpinfst1401demo |
| StorageAccountsBlobService | default | default | {serviceName}-{StorageAccountName}-default | ffc-demo-web-sndadpinfst1401demo-default |
| StorageAccountsBlobServicesContainer | {ContainerName} | container-01 | {serviceName}-{StorageAccountName}-default-{ContainerName} | ffc-demo-web-sndadpinfst1401demo-default-container-01 |
| StorageAccountsTableServicesTable | {TableName} | table01 | {serviceName}-{StorageAccountName}-default-{TableName} | ffc-demo-web-sndadpinfst1401demo-default-table01 |
| PrivateEndpoint | {privateEndpointPrefix}-{ResourceName}-{SubResource} | sndadpinfpe1401-sndadpinfst1401demo-blob | {privateEndpointPrefix}-{ResourceName}-{SubResource} | sndadpinfpe1401-sndadpinfst1401demo-blob |
| PrivateEndpointsPrivateDnsZoneGroup | default | default | {PrivateEndpointName}-default | sndadpinfpe1401-sndadpinfst1401demo-blob-default |
that are both used within the library chart and available to use within a consuming parent chart.

### `adp-aso-helm-library.default-check-required-msg`

Usage: `{{- include "adp-aso-helm-library.default-check-required-msg" . }}`

A template defining the default message to print when checking for a required value within the library. This is not designed to be used outside of the library.

### `adp-aso-helm-library.commonTags`

Usage: `{{- include "adp-aso-helm-library.commonTags" $ | nindent 4 }}` (`$` is mapped to the root scope)

Common tags to apply to the `tags` of all ASO resource objects on the ADP K8s platform. This template relies on the globally required values listed above.
|
For the Azure Service Operator to not delete the resources created in Azure on the deletion of the kubernetes resource manifest files, the below section can be added to Values.yaml
in the parent helm chart.
This specifies the reconcile policy to be used and can be set to manage
, skip
or detach-on-delete
. More info over here.
THIS INFORMATION IS LICENSED UNDER THE CONDITIONS OF THE OPEN GOVERNMENT LICENCE found at:
+http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3
+The following attribution statement MUST be cited in your products and applications when using this information.
+++Contains public sector information licensed under the Open Government license v3
+
The Open Government Licence (OGL) was developed by the Controller of Her Majesty's Stationery Office (HMSO) to enable information providers in the public sector to license the use and re-use of their information under a common open licence.
+It is designed to encourage use and re-use of information freely and flexibly, with only a few conditions.
+ + + + + + + + + + + + + +We have now implemented an abstraction layer within the adp-helm-library that allows the dynamic allocation of memory and CPU resources based on the memory and cpu tier provided in the values.yaml file.
+The new memory and cpu tier values are in the below table:
+TIER | +CPU-REQUEST | +CPU-LIMIT | +MEMORY-REQUEST | +MEMORY-LIMIT | +
---|---|---|---|---|
S | +50m | +50m | +50Mi | +50Mi | +
M | +100m | +100m | +100Mi | +100Mi | +
L | +150m | +150m | +150Mi | +150Mi | +
XL | +200m | +200m | +200Mi | +200Mi | +
XXL | +300m | +600m | +300Mi | +600Mi | +
CUSTOM | +<?> | +<?> | +<?> | +<?> | +
Instructions
+The following values can optionally be set in a values.yaml
to select the required CPU and Memory for a container:
example 1 - select an Extra Large (XL) tier:
+ +example 2 - select an Small (S) tier:
+ +example 3 - select custom values and provide your own values if the TIER sizes don't fit your requirements.
+example 4 - The default is Medium (M). If this works for you then you don't need to pass a memCpuTier.
+NOTE: +If you do not add a 'memCpuTier' then the Tier will default to 'M'
NOTE: If you do not add a `memCpuTier`, then the tier will default to 'M'.

NOTE: You can also choose CUSTOM and provide your own values if the TIER sizes don't fit your requirements. If you choose CUSTOM, `requestMemory`, `requestCpu`, `limitMemory` and `limitCpu` are required.

IMPORTANT: Your team namespace will be given a fixed amount of resources via ResourceQuotas. Once your cumulative resource request total passes the assigned quota on your namespace, all further deployments will be unsuccessful. If you require an increase to your ResourceQuota, you will need to raise a request via the ADP team. It's important that you monitor the performance of your application and adjust pod requests and limits accordingly. Please choose the appropriate CPU and memory tier for your application, or provide custom values for your CPU and memory requests and limits.
+https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-resource-management
+ + + + + + + + + + + + + + + + + +This document outlines the QA approach for the Azure Developer Platform (APD). The objective of the quality assurance is to ensure that all business applications developed and hosted on the ADP meet DEFRA's standards of quality, reliability and performance.
+The Quality Assurance approach follows the traditional QA Pyramid that modes how software testing is categorised and layered.
+ +Below are the tools that are currently supported on the ADP
+Type of Test | +Tooling | +
---|---|
Unit Testing | +"C# NUnit/ xUnit, Nsubstitute: NodeJS: Jest |
+
Functional/Acceptance | +WebDriver.IO | +
Security Testing | +"OWASP ZAP (Zed Attack Proxy) | +
API Testing (Contract Testing) | +PACT Broker | +
Accessibility Testing | +AXE Lighthouse? |
+
Performance Testing | +JMeter, BrowserStack. Azure Load Testing is under consideration |
+
Exploratory Testing (Manual) | +ADO Test Plans | +
Development teams use the ADP Portal to scaffold a new service using one of the exemplar software templates (refer to How to create a platform service). Based on the template type (frontend or backend), basic tests will be included that the teams can build on as they add more functionality to the service.
+The ADP Platform provides the ability to execute the above tests. These tests are executed as post deployment tests. The pipeline will check for the existence of specific docker-compose test files to determine if it can run tests. Refer to the how-to-guides for the different types of tests.
+++However, it is the responsibility of the delivery projects to ensure that the business services they are delivering have written sufficient tests for the different types of tests that meet DEFRA's standards.
+
The supported programming frameworks are .NET/C# and NodeJS/Javascript.
+The unit tests are executed in the CI Build Pipeline. SonarQube analysis has been integrated in the ADP Core Pipeline Teplate to ensure the code conforms to the DEFRA quality standards.
+Links to the SonarCloud analysis, Synk Analysis will available in the component page of the service in the ADP Portal.
+These end-to-end tests for internal (via Open VPN) or public endpoints for frontends and APIs.
+Refer to the Guide on how to create an Acceptance Test
+These tests should be executed against internal (via Open VPN) or public endpoints for frontends and APIs. Docker is used with BrowserStack to execute the peformance tests.
+As a pre-requisite, Non Functional Requirements should be defined by the delivery project to set the baseline for the expected behavior e.g. expected average API response time, page load duration.
+There are various types of performance tests.
+Refer to the Guide on how to create a Performance Test
+These tests verify that the all DEFRA public websites/business services are in compliance with WCAG 2.2 AA accessibility standard
+Refer to the guidiance on Understanding accessibility requirements for public sector bodies
+SonarQube Security Testing has been incorporated into the CI Build Pipeline. In addition to that, OWASP ZAP is executed as per of the post deployment tests.
+ + + + + + + + + + + + + + + + +ADP Services Secret Management
+The secrets in ADP services are managed using Azure Key Vault. The secretes for each individual services are imported to the Azure Key Vault through ADO Pipeline tasks and are accessed by the services by using the application configuration YAML files. The importing of secretes to the Key Vault and referencing them by individual services are automated using ADO Pipelines. The detailed workflow for secret management includes the following steps:
+1. Configure ADO Library
+Create variable groups for each environments of the service in ADO Library.
+Naming Convection: It follows the following convention for creating the variable groups for a service.
+1 |
|
Example: The variable groups for different environment of a service are shown below.
+ +Add the variables and the values for the secretes in each of the variable groups.
+Variable Naming Convection: It follows the following convention for creating the variables in variable groups.
+1 |
|
Example: Secrete variables for a service are shown below.
+ +2. ADO Pipeline - Import secrets to Key Vault
+The variables and values from the variable groups are automated to import to the Azure Key Vault using +Pipeline task and Power Shell scripts.
+Example: The code snippet involved in importing the secrets to the Azure Key Vault is + shown below.
+ +3. Azure Key Vault - Imported secretes
+After the secretes are added to the ADO Library variable groups and the service CI pipeline run successfully would import the secrets to the Key Vault as shown below.
+Example: Secretes imported to the Key Vault for a service are shown below
+ +4. App Config
+Access the secrets from the Key Vault through appConfig YAML files included in each of the services. +There are two different kinds of appConfig files.
+Environment specific appConfig file: Each service has it own environment specific appConfig file to access it + respective secrets from the Key Vault.
+File Naming Convection:
+1 |
|
Example: The appConfig files for different environments for a service are shown below.
+ +The type of the variable (key) that reference the secretes form the Key Vault should be defined as type: "keyvault" in the config YAML file.
+4. ADO Pipeline - Import App Config
+The Pipeline tasks shown below use the environment specific appConfig YAML files to import the secrets from Azure Key Vault to the service.
+5. Run Pipeline - appConfig only
+The secretes can be added to the Key Vault and also referenced by the service using the appConfig files. This can be achieved by running the pipeline on selecting the Deploy App Config check box. This helps in running only the secrete management tasks instated of running all the tasks in the pipeline. This is useful when updating the secretes of a service.
+ + + + + + + + + + + + + + +The Azure Development Platform Portal built using Backstage.
+The Portal enables users to self service by providing the functionality below.
+Include instructions for getting the project set up locally. This may include steps to install dependencies, configure the development environment, and run the project.
+ + + + + + + + + + + + + + +ADP enables authorised users to self-service through the platform, allowing them to create and manage required arms-length bodies, delivery programmes, and delivery teams. The data added can subsequently be edited by those authorised users and viewed by all.
+The diagram below illustrates the high-level process flow of user journeys, distinguishing between four types of users: ADP Admins, Programme Managers, Project Managers, and Project Developers. ADP Admins have the authority to create new ALBs (Arms-Length Bodies) and initially seed Programme Managers. Programme Managers are able to onboard additional Programme and Project Managers, as well as to create Delivery Programmes and Projects. Project Managers have the capability to create new Delivery Projects and onboard Delivery Project Members. Finally, Project Developers are tasked with creating and managing platform services.
+ +In the table below you can see the permissions per ADP Persona. Please note that users are not restricted to one role/persona. A single person may be a Programme Manager, a Team Manager for a team in their Programme and a developer within that team.
+ +Method | +Parameters | +Example Request Body | +Example Response | +
---|---|---|---|
GET | ++ | N/A | +[{ "creator":"user:default/johnDoe", "owner":"owner value", "title":"ALB 1", "alias":"ALB", "description": "ALB description", "url": null, "name":"alb-1", "id": "123", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z","updated_by": "user:default/johnDoe"} , { "creator":"user:default/johnDoe", "owner":"owner value", "title":"ALB 2", "alias":"ALB", "description": "ALB description", "url": null, "name":"alb-2", "id": "1234", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z","updated_by": "user:default/johnDoe"}] |
+
GET | +id |
+N/A | +{ "creator":"user:default/johnDoe", "owner":"owner value", "title":"ALB 1", "alias":"ALB", "description": "ALB description", "url": null, "name":"alb-1", "id": "123", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z","updated_by": "user:default/johnDoe"} |
+
POST | ++ | { "title": "ALB", "description": "ALB Description" } |
+{ "title": "ALB", "description": "ALB Description" , "url": null, "alias": null, "name": "alb", "creator":"user:default/johnDoe", "owner":"owner value", "id": "12345","created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z","updated_by": "user:default/johnDoe"} |
+
PATCH | +id |
+{ "id": "12345", "title": "Updated ALB Title" } |
+{ "title": "Updated ALB Title", "description": "ALB Description" , "url": null, "alias": null, "name": "alb", "creator":"user:default/johnDoe", "owner":"owner value", "id": "12345","created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z","updated_by": "user:default/johnDoe"} |
+
Method | +Example Response | +
---|---|
GET | +{"123": "ALB 1","1234": "ALB 2","12345": "ALB 3","123456": "ALB 4"} |
+
Method | Parameters | Example Request Body | Example Response
---|---|---|---
GET | | N/A | [{ "id": "123", "programme_managers": [], "arms_length_body_id": "12345", "title": "Delivery Programme 1", "name": "delivery-programme-1", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z", "updated_by": "user:default/johnDoe" }, { "id": "1234", "programme_managers": [], "arms_length_body_id": "12345", "title": "Delivery Programme 2", "name": "delivery-programme-2", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z", "updated_by": "user:default/johnDoe" }]
GET | id | N/A | { "id": "123", "programme_managers": [{ "aad_entity_ref_id": "123", "id": "1", "delivery_programme_id": "123", "email": "email@example.com", "name": "John Doe" }], "arms_length_body_id": "12345", "title": "Delivery Programme 1", "name": "delivery-programme-1", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z", "updated_by": "user:default/johnDoe" }
POST | | { "programme_managers": [{ "aad_entity_ref_id": "123" }, { "aad_entity_ref_id": "1234" }], "arms_length_body_id": "12345", "title": "Delivery Programme", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com" } | { "id": "1234", "programme_managers": [], "arms_length_body_id": "12345", "title": "Delivery Programme", "name": "delivery-programme", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z", "updated_by": "user:default/johnDoe" }
PATCH | | { "id": "1234", "title": "Updated Delivery Programme Title" } | { "id": "1234", "programme_managers": [], "arms_length_body_id": "12345", "title": "Updated Delivery Programme Title", "name": "delivery-programme", "alias": "Delivery Programme", "description": "Delivery Programme description", "finance_code": "1", "delivery_programme_code": "123", "url": "exampleurl.com", "created_at": "2024-02-26T15:58:40.337Z", "updated_at": "2024-02-26T15:58:40.337Z", "updated_by": "user:default/johnDoe" }
Method | Example Response
---|---
GET | [{ "id": "5464de88-bc76-4a0b-a491-77284c392dab", "delivery_programme_id": "0bd0cb6b-569a-4c0f-bc6d-5b8708f45c4a", "aad_entity_ref_id": "aad entity ref id 1", "email": "example@defra.onmicrosoft.com", "name": "name 1" }, { "id": "f0bca259-d0a2-4d30-8166-4569f8e7b6f2", "delivery_programme_id": "0bd0cb6b-569a-4c0f-bc6d-5b8708f45c4a", "aad_entity_ref_id": "aad entity ref id 2", "email": "example@defra.onmicrosoft.com", "name": "name 2" }]
Method | Example Response
---|---
GET | { "items": [{ "metadata": { "name": "example.onmicrosoft.com", "annotations": { "graph.microsoft.com/user-id": "aad entity ref id 1", "microsoft.com/email": "example@defra.onmicrosoft.com" }}, "spec": { "profile": { "displayName": "name 1" }}}, { "metadata": { "name": "example.onmicrosoft.com", "annotations": { "graph.microsoft.com/user-id": "aad entity ref id 2", "microsoft.com/email": "example@defra.onmicrosoft.com" }}, "spec": { "profile": { "displayName": "name 2" }}}] }
Method | Parameters | Example Request Body | Example Response
---|---|---|---
GET | | N/A | [{ "id": "123", "name": "delivery-project-1", "title": "Delivery Project 1", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "1", "delivery_project_code": "1", "url": "", "ado_project": "", "created_at": "2024-04-03T06:41:56.257Z", "updated_at": "2024-04-03T08:42:48.242Z", "updated_by": "user:default/johnDoe" }, { "id": "1234", "name": "delivery-project-2", "title": "Delivery Project 2", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "2", "delivery_project_code": "2", "url": "", "ado_project": "", "created_at": "2024-04-03T05:42:31.914Z", "updated_at": "2024-04-03T08:43:03.622Z", "updated_by": "user:default/johnDoe" }]
GET | id | N/A | { "id": "1234", "name": "delivery-project-2", "title": "Delivery Project 2", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "2", "delivery_project_code": "2", "url": "", "ado_project": "", "created_at": "2024-04-03T05:42:31.914Z", "updated_at": "2024-04-03T08:43:03.622Z", "updated_by": "user:default/johnDoe" }
POST | | { "title": "Delivery Project 3", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "3", "delivery_project_code": "3", "url": "", "ado_project": "" } | { "id": "12345", "name": "delivery-project-3", "title": "Delivery Project 3", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "3", "delivery_project_code": "3", "url": "", "ado_project": "", "created_at": "2024-04-03T05:42:31.914Z", "updated_at": "2024-04-03T08:43:03.622Z", "updated_by": "user:default/johnDoe" }
PATCH | | { "id": "12345", "title": "Updated Delivery Project Title" } | { "id": "12345", "name": "delivery-project-3", "title": "Updated Delivery Project Title", "alias": "Delivery Project", "description": "Delivery Project Description", "finance_code": "", "delivery_programme_id": "3", "delivery_project_code": "3", "url": "", "ado_project": "", "created_at": "2024-04-03T05:42:31.914Z", "updated_at": "2024-04-03T08:43:03.622Z", "updated_by": "user:default/johnDoe" }
We can extend Backstage's functionality by creating and installing plugins. Plugins can either be created by us (1st party) or we can install 3rd party plugins. The majority of 3rd party plugins are free and open source; however, there are some exceptions.
+This page tracks the plugins we have installed and the plugins we would like to evaluate.
Plugin | Category | Status | Author | Description
---|---|---|---|---
azure-devops | Catalog | Implemented | Backstage | Displays pipeline runs on component entity pages. We're not using the repos or README features. Requires components to have two annotations: dev.azure.com/project contains the ADO project name and dev.azure.com/build-definition contains the pipeline name.
GitHub pull requests | Catalog | Implemented | Roadie | Adds a dashboard displaying GitHub pull requests on component entity pages. Requires components to have the github.com/project-slug in their catalog-info file.
Grafana dashboard | Catalog | Implemented | K-Phoen | Displays Grafana alerts and dashboards for a component. Note that we cannot use the dashboard embed - Managed Grafana does not allow us to configure embedding.
Azure DevOps scaffolder actions | Scaffolder | Implemented | ADP | Custom scaffolder actions to get service connections, create and run pipelines, and permit access to ADO resources. Loosely based on the 3rd party package by Parfumerie Douglas.
GitHub scaffolder actions | Scaffolder | Implemented | ADP | Custom scaffolder actions to create GitHub teams and assign them to repositories.
Lighthouse | Catalog | Agreed | Spotify | Generates on-demand Lighthouse audits and tracks trends directly in Backstage. Helps to improve accessibility and performance and adhere to best practices. Requires a PostgreSQL database and a running instance of the lighthouse-audit-service API, which executes the tests before sending results back to the plugin.
SonarQube | Catalog | Agreed | SDA-SE | Adds frontend visualisation of code statistics from SonarCloud or SonarQube. Requires a SonarCloud subscription.
Prometheus | Catalog | Agreed | Roadie | Adds embedded Prometheus graphs and alerts into Backstage. Requires setting up a new proxy endpoint for the Prometheus API in the app-config.yaml.
Flux | Catalog | Agreed | Weaveworks | The Flux plugin for Backstage provides views of Flux resources available in Kubernetes clusters.
Kubernetes | Catalog | Agreed | Spotify | Kubernetes in Backstage is a tool designed around the needs of service owners, not cluster admins. Developers can easily check the health of their services no matter how or where those services are deployed, whether on a local host for testing or in production on dozens of clusters around the world.
Snyk | Catalog | Assess | Snyk | The Snyk Backstage plugin leverages the Snyk API to enable Snyk data to be visualised directly in Backstage.
KubeCost | Catalog | Assess | SuXess-IT | Kubecost is a plugin to help engineers get information about the cost usage/prediction of their deployments. Some development work needed around namespaces. It doesn't appear to be regularly maintained or updated.
Status | Description
---|---
Assess | Suggestions that we need to evaluate before accepting them into the backlog.
Agreed | Discussed and agreed to accept it, but more work is needed to flesh out details.
Accepted | The plugin is suitable for our portal and a story for installing it has been added to the backlog.
Implemented | The plugin has been implemented.
Rejected | The plugin is unsuitable for the portal and we won't be installing it.
Category | Description
---|---
Catalog | The plugin extends the software catalog, e.g. through a card or full-page dashboard.
Scaffolder | The plugin adds custom actions to the component scaffolder.
The ADP Portal is built on Backstage, an open-source platform for building developer portals. Backstage is a Node application which contains a backend API and React based front end. This page outlines the steps you need to follow to set up and run Backstage locally.
+[[TOC]]
Backstage has a few requirements to be able to run. These are detailed in the Backstage documentation, and the key ones are summarised below.
+Backstage requires a UNIX environment. If you're using Linux or a Mac you can skip this section, but if you're on a Windows machine you will need to install WSL.
WSL can either be installed via the command line (follow Microsoft's instructions) or from the Microsoft Store. You will then need to install a Linux distribution. Ubuntu is recommended; either download it from the Microsoft Store or run `wsl --install -d Ubuntu-22.04` in your terminal.
Familiarise yourself with:
> ⚠️ Everything you do with Backstage from this point forwards must be done in your WSL environment. Don't attempt to run Backstage from your Windows environment - it won't work!
You will need either Node 16 or 18 to run Backstage. It will not run on Node 20.
+The recommended way to use the correct Node version is to use nvm.
> ⚠️ If on a PC, make sure you install and configure nvm in your WSL environment.
You will then need to install Yarn globally: run `npm install --global yarn` in your WSL environment.
Make sure you've got Git configured. If on WSL follow the steps to make sure you've got Git configured correctly - your settings from Windows will not carry over.
+Ensure you have a GPG key set up so that you can sign your commits. See the guide on verifying Git signatures. If you have already set up a GPG key on Windows this will need to be exported and then imported in to your WSL environment.
+To export on Windows using Kleopatra, see here. To import using gpg on WSL, see here.
If installing WSL for the first time you will likely need to install the build-essential package: run `sudo apt install build-essential`.
Check if you have the Azure CLI installed in your WSL environment by running `az --version`. If this returns an error you need to install the Azure CLI: `curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash`. See Install the Azure CLI on Linux.
We have integrated Backstage with Azure AD for authentication. For this to work you will need to sign in to the O365_DEFRADEV tenant via the Azure CLI.
+After installing and configuring pre-requisites we can clone the adp-portal repo, configure Backstage, and run the application.
> ⚠️ Remember, if on Windows these steps must be followed in your WSL environment.
If you haven't already, create a folder in your Home directory where you can clone your repos.
+Clone the adp-portal repo into your projects folder.
+Client IDs, secrets, etc for integrations with 3rd parties are read from environment variables. In the root of a repo there is a file named env.example.sh. Duplicate this file and rename it to env.sh.
+A senior developer will be able to provide you with the values for this file.
+A private key is also required for the GitHub app. Again, a senior developer will be able to provide you with this key.
> ℹ️ We are hoping to move these environment variables to Key Vault in the future.
To load the environment variables into your terminal session run `. ./env.sh`. Make sure you include both periods - the first ensures that the environment variables are loaded into the correct context.
The application needs to be run from the /app folder - run `cd app` if you're in the root of the project.
Run the following two commands to install dependencies, and build and run the application:
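Assuming the repository follows the standard Backstage layout (with the usual `dev` script), the two commands will look like this:

```bash
# Install all workspace dependencies
yarn install

# Build and run the frontend and backend together
yarn dev
```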
To stop the application, press Ctrl+C twice.
If you have issues starting Backstage, check the output in your terminal. Common errors are below:
+"Backend failed to start up Error: Invalid Azure integration config for dev.azure.com: credential at position 1 is not a valid credential" - Have you loaded your environment variables? Run . ./env.sh
from the root of the repo, then try running the application again.
"MicrosoftGraphOrgEntityProvider:default refresh failed, AggregateAuthenticationError: ChainedTokenCredential authentication failed" - have you logged in to the Azure CLI? Run az login
and make sure you sign in to the O365_DEFRADEV tenant. Try running the application again.
Catalog data is pulled in from multiple sources which are configured in the `app-config.yaml` file. Individual entities are defined by a YAML file.
Backstage regularly scans the DEFRA GitHub organisation for repos containing a `catalog-info.yaml` file in the root of the master branch. The FFC demo services contain examples of this file (see ffc-demo-web). New components scaffolded through Backstage will contain this file (but it may need further customisation); existing components will need to have the file added manually.
A `catalog-info.yaml` file for a component might look like this:
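A minimal sketch (the names, ADO project, and owner below are illustrative, not taken from a real service):

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ffc-demo-web                             # hypothetical component name
  description: Demo web frontend
  annotations:
    github.com/project-slug: DEFRA/ffc-demo-web
    dev.azure.com/project: my-ado-project        # hypothetical ADO project
    dev.azure.com/build-definition: ffc-demo-web # hypothetical pipeline name
    sonarqube.org/project-key: ffc-demo-web
spec:
  type: frontend
  lifecycle: experimental
  owner: group:default/demo-team                 # hypothetical owning team
```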
The Backstage documentation describes the format of this file - it is similar to a Kubernetes object config file. The key properties we need to set are:

- Annotations: `github.com/project-slug` is used to pull data from the specified project into the Pull Requests dashboard; the `dev.azure.com` annotations pull pipeline runs into the CI/CD dashboard; `sonarqube.org/project-key` pulls in SonarCloud metrics for the specified project.
- Type: `spec.type` is either `frontend` (for a web application) or `backend` (for an API or backend service).

If a component consumes infrastructure such as a database or service bus queue then that must also be defined alongside the component. Multiple entities can be defined in a single file by using a triple dash `---` to separate them.
The minimum permissions that we require for our ADP GitHub App are:
+Please note:
Repository permissions permit access to repositories and related resources.

- Administration: repository creation, deletion, settings, teams, and collaborators.
- Contents: repository contents, commits, branches, downloads, releases, and merges.
- Metadata: search repositories, list collaborators, and access repository metadata.
- Pull requests: pull requests and related comments, assignees, labels, milestones, and merges.
+Organisation permissions permit access to organisation related resources.
+None required at this time
+These permissions are granted on an individual user basis as part of the User authorization flow.
+None required at this time
+GitHub Apps can request almost any permission from the list of API actions supported by GitHub Apps.
+Possible Remediations:
+Leaking or misplaced GitHub App credentials.
+Possible Remediation
Answer: Yes, please see the permission breakdown above with the reasons why we require them.
+“It's not possible to have multiple Backstage GitHub Apps installed in the same GitHub organisation, to be handled by Backstage. We currently don't check through all the registered GitHub Apps to see which ones are installed for a particular repository. We only respect global organisation installs right now.”
Answer: We should be able to have multiple instances of Backstage within the same GitHub organisation, although there may be conflicts with certain Backstage plugins. For example, GitHub Discovery searches repositories for a catalog-info.yaml file to allow entities to be registered automatically. If Backstage 1 and Backstage 2 both use the default GitHub Discovery provider configuration, they will pick up the same files as each other. Resolving this would be as simple as changing the config so that Backstage 1 or 2 looks for YAML files with different names or paths. A second option would be to restrict which repositories the GitHub App has access to.
To aid in remediating this concern, we will change the config where we can to add an "adp" prefix. For example, "adp-catalog-info.yaml".
We are not using the webhook at the moment but we may look to support GitHub events in future (Documentation).
The ADP portal is built on Backstage, using React and TypeScript on the frontend. This page outlines the steps taken to incorporate the GOV.UK branding into the ADP Portal.
Backstage allows the customization of themes to a certain extent; for example, the font family, colours, logos and icons can be changed by following the tutorials. All of the theme changes have been made within the App.tsx file.
In order to install GOV.UK Frontend you need to meet some requirements:
Once those are successfully installed you can run the following in your terminal within the adp-portal/app/packages/app folder:

`yarn add govuk-frontend`
In order to import the GOV.UK styles, two lines of code have been added within the style.module.scss file:
`$govuk-assets-path: "~govuk-frontend/govuk/assets/";` - this creates a path to the fonts and images of GOV.UK assets.

`@import "~govuk-frontend/govuk/all";` - this imports all of the styles, which enables the use of colours and typography.
The colour scheme is applied by exporting the GOV.UK colours as variables within the style.module.scss file into the Backstage themes created. Currently only a few colours are being used, but more variables can be added within the scss file and imported within other files. To import the scss file with the style variables, this statement is used in the App.tsx file:
+import styles from 'style-loader!css-loader?{"modules": {"auto": true}}!sass-loader?{"sassOptions": {"quietDeps": true}}!./style.module.scss';
> This import statement enables the scss file to load and be processed.
The style variables then were used within the custom Backstage themes:
+The font used within the ADP Portal is GDS Transport as the portal will be on the gov.uk domain.
To get this working within the style.module.scss file, the fonts were imported by assigning them to scss variables called govuk-assets-path-font-woff2 and govuk-assets-path-font-woff:
> As recommended, we are serving the assets from the GOV.UK assets folder so that the style stays up to date when there is an update to the GOV.UK frontend.
Then these variables were passed into the url of the font-face element:
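A hedged sketch of what that @font-face rule looks like with the two variables (the exact rule in the repo may differ):

```scss
// Register GDS Transport using the asset-path variables defined above
@font-face {
  font-family: "GDS Transport";
  src: url($govuk-assets-path-font-woff2) format("woff2"),
       url($govuk-assets-path-font-woff) format("woff");
  font-display: fallback;
}
```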
To customize the font of the Backstage theme, the scss was imported (see the colour scheme section) and used within the fontFamily element of the createUnifiedTheme function:
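A minimal sketch using the createUnifiedTheme API (the palette values here are illustrative; the real theme also applies the GOV.UK colour variables from the scss module):

```typescript
import { createUnifiedTheme, palettes } from '@backstage/theme';

// Apply GDS Transport as the theme font; colours would come from style.module.scss
const adpTheme = createUnifiedTheme({
  palette: { ...palettes.light },
  fontFamily: '"GDS Transport", arial, sans-serif',
});
```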
The logo of the ADP Portal was changed by updating the two files within the src/components/Root folder.

Both DEFRA logos are imported as PNGs and saved within the Root folder.
A brief description of changes being made.

Link to the relevant work items, e.g. Relates to ADO Work Item AB#213700 and builds on #3376 (link to ADO Build ID URL).
+Any specific actions or notes on review?
+Any relevant testing information and pipeline runs.
+{work item number}: {title}
The project's branch policy is configured to necessitate the use of Git signed commits for any merging activities. This policy serves a twofold purpose: firstly, it validates the authenticity of changes and acts as a barrier against unauthorised or malevolent alterations to the codebase.
+Secondly, it provides assurance of code integrity by demonstrating that changes have remained unaltered throughout transit and subsequent commits. During the evaluation of pull requests or merge requests, the presence of signed commits also offers a reliable means to confirm that the proposed changes have been authored by authorised contributors, thereby reducing the likelihood of unintentionally accepting unauthorised code.
+To use signed commits, developers must generate a GPG (GNU Privacy Guard) key pair, which includes a private key kept secret and a public key that is shared. Commits are then signed using the private key, and others can verify the commits using the corresponding public key.
Please refer to the following link, and make sure the email you enter in step 8 is your GitHub email account: https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key
+https://docs.github.com/en/authentication/managing-commit-signature-verification
+This documentation is to capture the existing design of demo/exemplar services in FFC platform. This will provide an overview of the components in the demo services and application flow.
The demo service contains 6 containerized microservices orchestrated with Kubernetes. The purpose of these services is to prove the platform's capability to provision the infrastructure required for developing a digital service, along with CI/CD pipelines, with minimal effort. This in turn allows developers to focus on the core business logic.
+Below are the demo services that are present at the moment.
Service | Dev Platform | Git Repo
---|---|---
Payments Service | Node.js | https://github.com/DEFRA/ffc-demo-payment-service
Payments Service Core | ASP.NET Core | https://github.com/DEFRA/ffc-demo-payment-service-core
Payments Web | Node.js | https://github.com/DEFRA/ffc-demo-payment-web
Claim Service | Node.js | https://github.com/DEFRA/ffc-demo-claim-service
Calculation Service | Node.js | https://github.com/DEFRA/ffc-demo-calculation-service
Collector Service | Node.js | https://github.com/DEFRA/ffc-demo-collector
Demo Web | Node.js | https://github.com/DEFRA/ffc-demo-web
Test | Code | Docker Compose | Dev | Test | Pre-production
---|---|---|---|---|---
Lint/Audit | X | | | |
Snyk Test | X | | | |
Static Code Analysis/SonarCloud | X | | | |
Functional/BDD | | X | X | |
Integration Tests/Contract testing using Pact Broker | | | X | X |
Performance Testing (JMeter) | | | | | X
Pen Testing (OWASP ZAP) | | X | | | X
Azure Policy extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. Azure Policy makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place. The add-on enacts the following functions:
+Azure Policy for Kubernetes supports the following cluster environments:
+https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes +https://learn.microsoft.com/en-us/azure/container-apps/dapr-overview?tabs=bicep1%2Cyaml
+TODO
+This page is a work in progress and will be updated in due course.
Add details about each service.
+This getting started guide summarises the steps for onboarding a delivery programme onto ADP via the Portal. It also provides an overview of the automated processes involved.
+Before onboarding a delivery programme you will first need to ensure that:
+By completing the steps in this guide you will be able to:
Once you have navigated to the 'ADP Data' page you will be presented with the 'Delivery Programmes' option. By clicking 'View' you will have the ability to view existing Delivery Programmes and add new ones if you have admin permissions.
You can start entering Delivery Programme information by clicking the 'Add Delivery Programme' button. You will be presented with various fields; some are optional. For example, 'Finance Code', 'Website', and 'Alias' are not required, and you can add them later if you wish.
+If the Arms Length Body (ALB) for your programme has already been created it will appear in the Arms Length Body dropdown and you will be able to select it accordingly. The programme managers' dropdown should also be pre-populated, and you are able to select more than one manager.
+This form includes validation. Once you have completed inputting the Delivery Programme Information and pressed 'create', the validation will run to check if any changes need to be made to your inputs.
Once you have created your Delivery Programme, you will automatically be redirected to the view page, which will allow you to look through existing programmes and edit them.
This getting started guide summarises the steps for onboarding a delivery project onto ADP via the Portal. It also provides an overview of the automated processes involved.
+Before onboarding a delivery project you will first need to ensure that:
+By completing this guide you will have completed these actions:
Once you have navigated to the 'ADP Data' page you will be presented with the 'Delivery Projects' option. By clicking 'View' you will have the ability to view existing Delivery Projects and add new ones if you have admin permissions.
You can start entering Delivery Project information by clicking the 'Add Delivery Project' button. You will be presented with various fields; some are optional. For example, 'Alias', 'Website', 'Finance Code' and 'ADO Project' are not required, and you can add them later if you wish.
+If the Delivery Programme for your project has already been created it will appear in the Delivery Programme dropdown, and you will be able to select it accordingly.
+This form includes validation. Once you have completed inputting the Delivery Project Information and pressed 'create', the validation will run to check if any changes need to be made to your inputs.
+...
+...
+...
+...
Once you have created your Delivery Project, you will automatically be redirected to the view page, which will allow you to look through existing projects and edit them.
This getting started guide summarises the steps for onboarding a user onto your delivery project in ADP. It also provides an overview of the automated processes involved.
+Before onboarding a user on to your delivery project you will first need to ensure that:
+By completing this guide you will have completed these actions:
+....
+...
+PostgreSQL is the preferred relational database for microservices. This guide describes the process for creating a database for a microservice and configuring the microservice to use it.
+Note
+The ADP Platform currently supports PostgreSQL only as the option available for a relational database.
There are two ways of creating a Postgres database on the ADP.
When scaffolding a new backend service using the ADP Portal, you have the option to specify the name of the database. Refer to the section on Selecting a template.
For an existing service, you can add values to the Infrastructure Helm Chart values. Refer to the Infrastructure section on Database for Flexible Server.
+Tip
+An example of how to specify the Helm Chart values is provided in the ffc-demo-claim-service repository, refer to the configuration in the values.yaml.
+The ADP Platform CI and deployment pipelines support database migrations using Liquibase.
Create a Liquibase changelog defining the structure of your database, available from the root of your microservice repository in `changelog/db.changelog.xml`. Guidance on creating a Liquibase changelog is outside the scope of this guide.
Update `docker-compose.yaml`, `docker-compose.override.yaml`, and `docker-compose.test.yaml` to include a Postgres service, and add Postgres environment variables to the microservice. Replace `<programme code>` and `<service>` as per the naming convention described above.
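A minimal sketch of the kind of compose change involved (the image tag, credentials and variable names here are illustrative, not the scaffolded defaults):

```yaml
# docker-compose.yaml - add a Postgres service and point the microservice at it
services:
  <programme code>-<service>-postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: <programme code>_<service>
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ppp

  <programme code>-<service>:
    environment:
      POSTGRES_HOST: <programme code>-<service>-postgres
      POSTGRES_DB: <programme code>_<service>
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ppp
```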
The following scripts and files are scaffolded as part of your backend service to provide a good local development experience.
- `docker-compose.migrate.yaml` in the root of your microservice repository, which spins up Postgres in a Docker container.
- The `scripts/` folder contains three bash scripts: `start`, `test` and `postgres-wait`.
- The `scripts/migration/` folder contains two scripts to apply and remove migrations: `database-up` and `database-down`.

Execute the `start` script to start the Postgres container.
+This is a two step process.
+Tip
+Request the ADP Platform Team to enable the extension on the Postgres Flexible server if it is not in the list above of enabled extensions.
+When scripting the database migrations for creating extensions, use IF NOT EXISTS
. This will ensure that the scripts can both run locally and an in Azure.
When running the Postgres database locally in Docker, you will have sufficient permissions to create the extensions. However, in Azure, the ADP Platform will apply the migrations to the database instead of using the microservice's managed identity. If you don't use IF NOT EXISTS
, the migrations on the Azure Postgres database will fail due to insufficient permissions.
Below is an example of a SQL script you can use in your migration to enable an extension.
+ + + + + + + + + + + + + + + + + +In this how to guide you will learn how to create a new Platform service on ADP for your delivery project and team. You will also learn what automated actions will take place, and any areas of support that may be needed.
+Before creating Platform business service (microservice), you will first need to ensure that you have:
+Note
+Please contact the ADP Platform Engineering team for support if you don’t have, and cannot setup/configure, these prerequisites.
+By completing this guide, you will have completed these actions:
+The following areas require the support of the ADP Platform Team for your service initial setup:
+Note
+The initial domain (Frontend service or external API URL) creation is currently done via the Platform team pipelines. Please contact the Platform team to create this per environment once the service is scaffolded.
+Tip
+You can choose a Node.js for Frontends, or for Backends and APIs in Node.Js or C#.
+Enter the properties describing your component/service:
+{programme}-{project}-{service}
. For example, fcp-grants-web.To encourage coding in the open the repository will be public by default. Refer to the GDS service manual for more information. You can select a ‘private’ repository by selecting the ‘private repo’ flag in GitHub.
+The scaffolder will create a new repository and an associated team with ‘Write’ permissions:
+CI/CD pipelines will be created in Azure DevOps:
+DefraGovUK
and not changeable.Now you have reviewed and confirmed your details, your new Platform service will be created! It will complete the actions detailed in the overview section. Once this process completes, you will be given links to your new GitHub repository, the Portal Catalog location, and your Pipelines. You now have an ADP business service!
+ +We use HELM Charts to deploy, manage and update Platform service applications and their dedicated and associated infrastructure. This is ‘self-service’ managed by the platform development teams/tenants. We use Azure Bicep/PowerShell for all other Azure infrastructure and Entra ID configuration, including Platform shared and ‘core’ infrastructure. This is managed by the ADP Platform Engineers team. An Azure Managed Identity (Workload ID) will be automatically created for every service (microservice app) for your usage (i.e. assigning RBAC roles to it).
+The creation of infrastructure dedicated for your business service/application is done via your microservice HELM Chart in your repository, and deployed by your Service CI/CD pipeline that you created earlier. A ‘helm’ folder will be created in every scaffolded service with 2 subfolders. The one ending with ‘-infra’ is where you define your service’s infrastructure requirements in a simple YAML format.
+Note
+The full list of supported ‘self-service’ infrastructure can be found in the ADP ASO Helm Library Documentation on GitHub with instructions on how to use it.
+Image below is an example of how-to self-service create additional infrastructure by updating the HELM charts ‘values.yaml’ file with what you require to be deployed:
+ +Warning
+Please contact the ADP Platform Engineers Team if you require any support after reading the provided documentation or if you’re stuck.
A system is a label used to group together multiple related services. This label is recognised and used by Backstage to make it clear which services interact with each other. Systems are a concept provided by Backstage out of the box, and are documented here.
In order to create a system, you simply need to add a new definition for it to the ADP software templates repository. There is an example system to show the format that should be used. Once this system is added, you need to add a link to it from the all.yaml file. You will also need to choose a name for your system, which should be in the format `{delivery-project-id}-{system-name}-system`, e.g. `fcp-demo-example-system`.

Once the system has been added and the `all.yaml` file has been updated, you will need to wait for the ADP portal to re-scan the repository, which happens every hour. If you need the system to be available sooner than that, an ADP admin can trigger a refresh at any time by requesting a refresh of the project-systems location.
The all.yaml file

The `all.yaml` file is what tells the ADP portal where to find the systems, and so every file containing a definition for a system must be pointed to by this file. To point to a new file, you will need to add a new entry to the `targets` array, which should be the relative path from the `all.yaml` file to your new system file.
all.yaml
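For example, a sketch assuming the standard Backstage Location format (the Location name and target path are illustrative):

```yaml
apiVersion: backstage.io/v1alpha1
kind: Location
metadata:
  name: project-systems
spec:
  targets:
    - ./my-system.yaml   # relative path to each system file
```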
The {system}.yaml file

Your system will actually be defined inside its own `.yaml` file. The name of this file should be the name of the system you are creating, to make it easier to track which system is defined where. The format of this file should follow this example:
my-system.yaml
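A sketch of the system descriptor (the title and owner below are hypothetical):

```yaml
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
  name: fcp-demo-example-system
  title: FCP Demo Example System        # hypothetical title
spec:
  owner: group:default/fcp-demo-team    # hypothetical owning team
```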
In this how to guide you will learn how to build, deploy, and monitor a Platform service (Web App, User Interface, API etc) for your team. It includes information about Pipelines specifically and how the ADP Backstage Portal supports this.
+Before building and deploying a service, you will first need to ensure that:
+ +By completing this guide, you will have completed these actions:
All pipelines in ADP are created in your project's or programme's Azure DevOps project. This is specific to your team; it's the one you chose when scaffolding the service. We use YAML Azure Pipelines, and Defra GitHub to store all code. Pipelines are mapped 1-1 per microservice, and can deploy the Web App, Infra, App Configuration and Database schema together as an immutable unit.
+In your scaffolded repository:
+<your-service-name>
. <projectcode>-<servicename>-api
The above image is an example of a scaffolded pipeline called 'adp-demo99' in the DEMO folder.
+Can I find this in the ADP Portal?
+Yes! Simply go to your components page that you scaffolded/created via the ADP Portal, and click on the CI/CD tab, which will give you information on your pipeline, and will link off to the exact location.
+We promote continuous integration (CI) and continuous delivery (CD). Your pipeline will trigger (run the CI build) automatically on any change to the ‘main’ branch, or any feature branch you create and anytime you check-in. This includes PR branches. You simply run your pipeline from the ADO Pipelines interface by clicking ‘Run pipeline’.
+You can:
+Pipeline documentation and parameters and configuration options can be found here.
+ +Above image of pipeline run example.
You can change some basic functionality of your pipeline. A lot of it is defined for you in a convention-based manner, including the running of unit tests, reporting, and the environments that are available; some options are selectable, such as building .NET or Node.js apps, the location of test files, PR and CI triggers, and the parameters to deploy configuration only or deploy automatically on every feature build. Full details can be found on the Pipelines documentation GitHub page.
+Above image is an example of what can be changed in terms of Pipeline Parameters (triggers, deployment types, paths to include/exclude). +The below image is an example of what can be changed. You can change things like your config locations, test paths, what ADO Secret variable groups you wish to import, what App Framework (Node or C#) etc. +
+To promote your code through environments, you can use the Azure Pipelines user interface for your team/project to either:
+Your environments and any default gates or checks will be automatically plotted for you. This is an example of a full pipeline run. You can select, based on the Platform route-to-live documentation, which environments you promote code to. You don’t need to go to all environments to go live.
+ +This is an example of a waiting ‘stage’ which is an environment:
+ +To promote code, you can select ‘Review’ in the top-right hand corner and click approve.
+Full Azure Pipelines documentation can be found here.
+Every pipeline run includes steps such as unit tests, integration tests, acceptance tests, app builds, code linting, static code analysis including Sonar Cloud, OWASP checks, performance testing capability, container/app scanning with Snyk etc.
+We report out metrics in Azure DevOps Pipelines user interface for your project and service, for things like Unit Test coverage, test passes and failures, and any step failures. Full details are covered in Pipelines documentation. Details can also be found in your projects Snyk or Sonar Cloud report.
+From the ADP Backstage Portal, you can find the following information for all environments:
+The portal is fully self-service. And each component deployed details the above. You should use the ADP Portal to monitor, manage and view data about your service if it isn’t included in your Pipeline run.
+ + + + + + + + + + + + + + + + +In this how to guide you will learn how to create, deploy, and run an acceptance test for a Platform service (Frontend Web App or an API) for your team.
+Before adding acceptance tests for your service, you will need to ensure that:
+ +By completing this guide, you will have completed these actions:
These tests may include unit, integration, acceptance, performance, accessibility etc., as long as they are defined for the service.
+Note
+The pipeline will check for the existence of the file test\acceptance\docker-compose.yaml
to determine if acceptance tests have been defined.
You may add tags to features and scenarios. There are no restrictions on the name of the tag. Recommended tags include: @sanity, @smoke, @regression.
+If custom tags are defined, then the pipeline should be customized to run those tests as detailed in following sections.
PowerShell: `$ENV:TEST_TAGS = "@sanity or @smoke"`

Bash: `export TEST_TAGS="@sanity or @smoke"`
Note
Every pipeline run includes steps to run various post-deployment tests. These tests may include unit, integration, acceptance, performance, accessibility etc., as long as they are defined for the service.
+You can customize the tags and environments where you would like to run specific features or scenarios of acceptance test
+If not defined, the pipeline will run with following default settings.
+Please refer ffc-demo-web pipeline:
+Test execution reports will be available via Azure DevOps Pipelines user interface for your project and service.
+ + + + + + + + + + + + + + + + +In this how to guide you will learn how to create, deploy, and run a performance test for a Platform service (Web App, User Interface etc) for your team.
+Before adding acceptance tests for your service, you will need to ensure that:
+ +By completing this guide, you will have completed these actions:
+Note
Every pipeline run includes steps to run various tests pre-deployment and post-deployment. These tests may include unit, integration, acceptance, performance, accessibility etc., as long as they are defined for the service.
+The pipeline will check for the existence of the file test\performance\docker-compose.jmeter.yaml
to determine if performance tests have been defined.
The Performance Test scripts should be added to the `test\performance` folder in the GitHub repository of the service. Refer to the ffc-demo-web example. This folder should contain a `docker-compose.jmeter.yaml` file, which is used to build up the docker containers required to execute the tests. As a minimum, this will create a JMeter container and optionally create Selenium Grid containers. Using BrowserStack is preferred to running the tests using Selenium Grid hosted in Docker containers because you get better performance and scalability as the test load increases.
Execute the above commands in bash or PowerShell.
+You can modify the number of virtual users, loop count and ramp-up duration by changing the settings in the file perf-test.properties.
+You can then reference these variables in your JMeter Script.
+ + +You can customize the environments where you would like to run specific features or scenarios of performance test
+ +if not defined, the pipeline will run with following default settings
+Details on migrating your existing project and its services to ADP.
+ADP Portal Setup:
+For each of your platform services you now need to migrate them over to ADP and create the needed infrastructure to support them. Link to guide.
+Once all services/ infrastructure are created and verified in SND3 (O365_DefraDev), will be begin the process of pushing the services/ infrastructure to environment in the DEFRA tenant, DEV1, TST½, and PRE1. Once deployment is complete and tested in lower we will be able to progress to PRD1 ensure that the DEFRA release management progress is adhered to.
+Developers of the delivery project actively learning and using the platform to develop new features.
+As a new developer we recommend starting at "Why ADP" to under the platforms benefits and challenges.
+Near to completion of the migration before the service goes live on ADP. Data from the old production environment will need to be moved into data services wih in ADP that was created as part of the infrastructure migrations stage. In order for this stage to go smoothly the old production traffic will need to be stopped in order to stop the flow of traffic in to the old data services. Allowing the data of old services to be transferred into the selected ADP data services.
+Depending on the selected data service it will require different methods to transfer data between production environment which is detailed in "migrate-production-data".
+...
+Migration complete continue to
+ + + + + + + + + + + + + + + + +In this guides you will learn how to migrate your existing service to ADP.
+In this guide you will learn how to migrate your existing service to ADP.
+ + + + + + + + + + + + + + + + +Overview of the ADP Copilot, a tool that provides a conversational interface to the Azure Development Platform (ADP). It outlines the features and capabilities of the ADP Copilot, such as the ability to interact with the ADP Portal, Azure DevOps, and GitHub using natural language. It describes how the ADP Copilot can be used to create, manage, and monitor resources in Azure, Azure DevOps, and GitHub, as well as how it can be used to automate tasks and workflows. The ADP Copilot is designed to streamline the development process and improve collaboration between team members by providing a unified interface for interacting with the ADP Platform.
+The ADP Copilot provides the following key features:
+The ADP Copilot is built using the following components:
+main
branch to ADP External & Internal Documentation.Example of the metadata stored in formatter of the documentation:
+Info
All of these frontmatter fields are required for the documentation to be indexed correctly.
UK South Azure OpenAI API is used to process the natural language queries made by the user. This restricts which models can be used and the amount of data that can be processed.

- `GPT-4-turbo`: used to process the natural language queries made by the user. ADP will also experiment with other models like `GPT-3.5-turbo`.
- `text-embedding-ada-002`: used to vectorise and index the ADP Documentation to provide search capabilities. The preferred model would be `text-embedding-3-large` due to its capabilities, but it is not available in any UK region.

Azure AI Search Index is used to store the vectorised and indexed ADP Documentation. There is currently only one index used, and it requires no indexer to populate the index due to a script that updates the index in the Azure Pipeline:
+Index fields:
Field Name | Type | Retrievable | Filterable | Sortable | Facetable | Searchable | Description
---|---|---|---|---|---|---|---
id | String | Yes | Yes | No | No | No | Unique identifier of the document
content | String | Yes | No | No | No | Yes | Content of the document
content_vector | SingleCollection | Yes | No | No | No | Yes | Vector representation of the document content
title | String | Yes | No | No | No | Yes | Title of the document
source | String | Yes | Yes | No | No | No | Source of the document
uri | String | Yes | Yes | No | No | No | URI of the document
last_update | DateTimeOffset | Yes | Yes | No | No | No | Last update timestamp of the document
summary | String | Yes | No | No | No | No | Summary of the document
repository | String | Yes | Yes | No | No | No | GitHub repository of the document
metadata | String | Yes | No | No | No | Yes | Full metadata of the document
Azure Cosmos DB is used to store the chat history of the user's interactions with the ADP Copilot. This provides a history of the interactions for an ADP user, and supports improvements to the AI orchestration and the auditing requirements of the ADP Copilot.
+Example:
+The ADP Copilot is currently in the development stage and is being built in the following stages:
+Note
+We are using the Intelligent Application Lifecycle to develop the ADP Copilot.
+Explore: Proof of Concept
+Build & Augment
+Improve & Optimise
+TBC
+ + + + + + + + + + + + + + + + +The table below details the roles and permissions that can be used for manual testing of the ADP portal and specific actions
Name | Role | Permissions
---|---|---
adptestuser1 | Portal User | No permissions but has portal access (added to the portal users group)
adptestuser2 | Team Member | Non-tech team member of the Delivery Project team
adptestuser3 | Admin Tech Team Member | Admin tech team member of the Delivery Project team
adptestuser4 | Programme Admin | Delivery Programme Admin of a programme
adptestuser5 | Admin and Tech Member | Admin for Project A and also tech member for Project B
TODO
+This page is a work in progress and will be updated in due course.
+This article details the Service Application and Hosting Architecture for the solution at a high level. It will detail some of the initial decisions made and reasonings.
+This ADR is to record App Hosting services.
+Context -
+Application Hosting is a key part of building and delivering scalable and secure microservices for business services.
+TL;DR-
ADP will build upon multiple containerised hosting options within Azure. Primarily, this will focus on Azure Kubernetes Service (AKS) to orchestrate, scale and run business services in a transparent manner. AKS has been chosen as the primary toolchain because of its scalability, security and orchestration capabilities. Secondary options that are being continually evaluated and tested include Azure Container Apps (secondary) and Azure Functions (triggers or schedules).
All applications that run must be containerised, and the default choice is AKS.
+Requirements-
+Decision -
+Primary:
+Secondary:
Azure Key Vault supports public CA integration with DigiCert. This is a fully Microsoft-managed CSR submission and approval process, and it will be used for "back end" certificates in all environments. https://dev.azure.com/defragovuk/DEFRA-DEVOPS-COMMON/_git/Defra.Certificate.Renewals
+Approval-
+Platform Architecture
+Azure Service Operator (ASO) allows you to deploy and maintain a wide variety of Azure Resources using the Kubernetes tooling you already know and use, i.e. HELM, YAML configuration files and FluxCD or Kubectl.
+Instead of deploying and managing your Azure resources separately from your Kubernetes application, ASO allows you to manage them together, automatically configuring your application as needed. For example, ASO can set up your Redis Cache or PostgreSQL database server and then configure your Kubernetes application to use them.
+What is the Platform's goal here?
+To enable as much developer self-service as possible for 80% of common developer scenarios when building business services.
Can I have an example diagram? Sure - here's one from Microsoft; see the article here...
Currently, we have an approach where developers use HELM Charts and FluxCD to self-service deploy their applications to AKS in a GitOps manner, securely. It is a developer-driven CI and CD process without Platform team involvement for the majority of the work. Common HELM libraries are there to support you as well. With the addition of ASO, this expands to the Azure infrastructure outside of AKS that supports your applications (storage, queues, identities).
+If ASO is not appropriate for the scenario or the component isn't supported, we can use our platform-native fallback: Azure Pipelines with Azure Bicep templates and/or PowerShell and CLI scripts. It's important to remember that Bicep and our supporting scripts are our bedrock, defined in such a way that allows for a scalable approach to manage a Service team's dedicated infrastructure. But it requires deeper context of Azure, the configuration and a good understanding of Bicep or PowerShell etc.
+With the dual track approach, we can scale and support a wide variety of teams, and have fallback options and Azure native tooling to ensure we can continue to deliver applications to production following assured processes.
+Any other tools exist to compete with ASO?
Azure Component | ASO Support? | MVP? | Self-Service? | Manage base component? | Description & considerations
---|---|---|---|---|---
Resource Groups | Y | Y | Y | Y | RG write on Sub only.
PostgreSQL Database | Y | Y | Y | N | Database write/read on Postgres only. Workload IDs assigned to specific per-service DBs.
Managed Identities | Y | Y | Y | Y | Can create and assign RBAC.
Storage Accounts | Y | Y | Y | Y | Can create & manage, with templated support for networking & config.
Service Bus (Queues/Topics) | Y | Y | Y | N | Can only create/read: Queues, Topics, Subscriptions, and RBAC on specific entities.
Authorization (RBAC on components) | Y | Y | Y | N | RBAC on designated components within Subscription.
Azure Front Door | Y | N | Y | N | TBD: allow self-creation of DNS entries, domains and routes to designated Cluster.
It is safe to assume that if a component is not listed, it is not ASO-supported and will be directly managed via Bicep templates, modules and PWSH scripts.
+Further to this, you will not be able to fully manage the lifecycle of some resources, i.e. Service Bus or the PostgreSQL Flexible Server. This is by design as it's a Platform responsibility.
+We simply don't know at this stage. It is in trial mode and our approach may differ as we expand, learn more and grow as a Platform.
We have set up ASO in Single Operator mode with a multi-tenant configuration, enabling the use of separate credentials for managing resources in different Kubernetes namespaces.
+Azure Service Operator supports four different styles of authentication today.
+Azure-Workload-Identity authentication (OIDC + Managed Identity) is being used by ASO in ADP.
+Each supported credential type can be specified at one of three supported scopes:
- Global - the credential applies to all resources managed by ASO.
- Namespace - the credential applies to all resources managed by ASO in that namespace.
- Resource - the credential applies only to the specific resource it is referenced on.

When presented with multiple credential choices, the operator chooses the most specific one: resource scope takes precedence over namespace scope, which takes precedence over global scope.
+ASO in ADP is using Namespace scoped credentials. Each project team will have an ASO secret in their own namespace linked to a Managed Identity which will only have access to Azure resources the team should be allowed access to.
+The Platform Team will have their own ASO credential scoped at the Subscription level with Contributor and UAA access. This will allow the Platform Team to create Project Team resources using ASO.
+ +The Platform Team will create the following resources using ASO to onboard a Project Team:
We still need to work out how to inject certain values automatically into the ASO Kubernetes secrets managed by Flux. These are currently being added manually as a post-deployment step.
+The values we need to pass in for the Platform Team secret are:
+The values we need to pass in for the Project Team secret and Managed Identity Federated credential are:
+The below story has been raised to capture the above requirements: +https://dev.azure.com/defragovuk/DEFRA-FFC/_workitems/edit/248355
+We have also created a configmap which is manually installed on AKS in SND1. We did this so we didn't have these variables visible in our public repo. This will also need to be automated.
+We are an Internal Development Platform that supports the delivery and maintenance of business services and applications for a variety of consumers. As part of this, we have a range of Common Pipelines that both Platform and Delivery teams can use to build and deploy their applications to all environments.
As a business service using the Azure Developer Platform (ADP), you are defined as a Platform Tenant. That means your 'service' or 'product' is deployed onto the ADP and follows certain standards and conventions to expedite delivery.
+Infrastructure is layered into the following levels:
Each level builds upon the previous one. That means the Bootstrapping comes before the Core, and the Core before the Products are delivered. Finally, the Service level is the smallest deployment on the Platform and focuses on HELM (FluxCD) deployments and any DB schema upgrades if required. We like to call these our continuous-delivery pipelines.
In each environment, there will be exactly one set of all platform-level (Core & Bootstrap) infrastructure, and exactly one set of each of the product-level infrastructure configurations deployed. Finally, the Service level is added as the most granular. Taken together, these infrastructure levels fully constitute an application or business service.
The Services deployment focuses on the HELM Chart deployments, using FluxCD. This is a GitOps approach to application deployments and may contain database schema upgrades where relevant. This design can be found here. We also use this pipeline to deploy ADP Shared Services onto the AKS Cluster.

The following diagram shows the deployment layers, and the types of infrastructure that might be found in a given environment, including the services.
Pipeline | Description | Supported Components
---|---|---
Bootstrap | Contains pipelines used for bootstrapping, e.g. setting up service connections | VNET, Service Connections, Service Principals
Core | Contains pipelines used to install the ADP Platform Core resources, e.g. AKS. These are all shared Platform components used by platform tenants. | AKS Cluster, Networking (VNet, NSGs, Subnets, UDRs), Key Vaults & config, Service Bus Core, Azure Front Door, Platform Storage Accounts, PostgreSQL Flexible Server, Cosmos DB, Azure Redis Cache, App Insights, Log Analytics, Container Registry, AAD Configuration (App Registrations, AAD Groups etc.), App Configuration Service Core, Managed Identities
Product | Contains pipelines used for onboarding & deploying services onto the ADP Platform (i.e. their infrastructure components) | Service Bus Queues, Topics & Subscriptions, Managed Identities & RBAC, Service Databases, Front Door DNS Profiles (public and private), App Configuration Key-Value Pairs, Key Vault Key-Value Pairs, App Registrations & AAD Groups, Service Storage Accounts (& tables/containers, etc.)
Services | Contains Service pipelines for deploying into the AKS Cluster with FluxCD (GitOps Pipelines) | Service & ADP HELM (Application) Deployments with FluxCD, Database Migrations (Liquibase)
Design Principles

- Application repositories will contain the application source code, docker files and helm charts to deploy the application.
- Application deployments are managed through a `HelmRelease`.
- All GitOps environments should use the main branch. Approaches such as a branch per environment have downsides and should be avoided.
- Each environment will have an environment-specific Azure Container Registry that the CI pipeline will push the docker images and artifacts (helm charts etc.) to.
- CI/CD pipelines should support both continuous deployment and continuous delivery.
+ + + + + + + + + + + + + + + + +Context: These are the findings from the 'Build Deployment completion Trigger' Spike. The goal of the findings is to be able to Execute Post deployment tests when Flux CD has completed deploying a new application.
We can utilize the Flux notification controller to dispatch events to external systems (Azure Function, Azure Event Hub, Slack, Teams): https://fluxcd.io/flux/components/notification/
Reference: https://github.com/fluxcd/notification-controller/blob/main/docs/spec/v1beta2/providers.md#sas-based-auth
Using SAS-based authentication:

Create a secret containing the shared access key:

```bash
kubectl create secret generic webhook-url --from-literal=address="Endpoint=sb://events-hub-adp-poc.servicebus.windows.net/;SharedAccessKeyName=flux-notifications;SharedAccessKey=an1ZOt9v90oycqy67rbcnEoXaIGecBLAH+AEhD/vy1g=;EntityPath=flux-events"
```
Create a Provider resource for Event Hub:
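A minimal sketch of such a Provider, assuming the webhook-url secret created above (resource names are illustrative):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Provider
metadata:
  name: azureeventhub
  namespace: flux-system
spec:
  type: azureeventhub
  # the referenced secret holds the Event Hub connection string in its 'address' key
  secretRef:
    name: webhook-url
```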
Create an Alert resource for the type of alerts we want to monitor:
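For example, a hedged sketch that forwards HelmRelease events (names are illustrative):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Alert
metadata:
  name: helmrelease-events
  namespace: flux-system
spec:
  providerRef:
    name: azureeventhub
  eventSeverity: info
  eventSources:
    # forward events for every HelmRelease so deployment completion can be detected downstream
    - kind: HelmRelease
      name: '*'
```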
To make an ADO pipeline wait for Flux deployment completion, we can utilize the AzureFunction@1 task. This allows us to call an Azure Function asynchronously. The function can then poll a database (or queue?) and, when a HelmRelease completion entry appears for the service, call back to the pipeline to continue:
Example AzureFunction@1 YAML:
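A minimal sketch, assuming a hypothetical Function App URL and key variable; setting waitForCompletion to 'true' selects the callback completion mode:

```yaml
- task: AzureFunction@1
  displayName: Wait for Flux deployment completion
  inputs:
    # hypothetical Function App URL and key, for illustration only
    function: 'https://adp-flux-watcher.azurewebsites.net/api/WaitForHelmRelease'
    key: '$(fluxWatcherFunctionKey)'
    method: 'POST'
    body: '{"service": "$(serviceName)", "version": "$(imageTag)"}'
    # 'true' = callback mode: the function must call back into the pipeline to signal completion
    waitForCompletion: 'true'
```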
Example of a Function App that uses the callback completion mode.
+GitOps with Flux v2 is enabled as a cluster extension in Azure Kubernetes Service (AKS) clusters.
The microsoft.flux cluster extension is installed using a Bicep template (aks-cluster.bicep) and an ADP infra pipeline (platform-adp-core).
Below is a snippet of the code for enabling Flux on AKS.
After the microsoft.flux cluster extension has been installed, create a fluxConfiguration resource that syncs the Git repository source adp-flux-core to the cluster and reconciles the cluster to the desired state. With GitOps, the Git repository is the source of truth for cluster configuration and application deployment.
The Flux configuration links Flux to the ADP Flux Git repository and defines:
Refer to the documentation for the Flux repositories structure for details of the two Flux repositories (adp-flux-core and adp-flux-services) and their folder structures.
The Flux Configuration has three Kustomizations:
Kustomization | Path | Purpose
---|---|---
cluster | ./clusters/ | Cluster-level configurations, e.g. Flux controllers, CRDs
infra | ./infra/ | Core services, e.g. Nginx Plus. Depends on the cluster Kustomization
services | ./services/ | Business applications. Depends on the infra Kustomization
> The Kustomizations have been configured with dependencies to ensure the Flux deployments are done in the correct sequence, starting with the Cluster Kustomization, followed by Infra and lastly Services.
Below is a snippet of the Kustomizations configuration from aks-cluster.bicep and aks-cluster.parameters.json.
Although we have two Flux Git repositories, we use a single Flux Configuration because dependencies cannot be set at the Flux Configuration level. Instead, we have a single Flux Configuration with three Kustomizations that are deployed in sequential order: Cluster > Infra > Services.
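For illustration only, the same dependency chain expressed as standalone Flux Kustomization manifests; the platform actually configures this through the fluxConfiguration resource in aks-cluster.bicep, and the names and intervals below are assumptions:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: adp-flux-core
  path: ./infra
  prune: true
  # infra only reconciles once the cluster Kustomization is ready
  dependsOn:
    - name: cluster
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: services
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: adp-flux-core
  path: ./services
  prune: true
  # services reconcile last, after infra
  dependsOn:
    - name: infra
```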
The Services Flux configuration contains a GitRepository and a Kustomization file that points to the Services Flux Git repository adp-flux-services using the path ./services/environments/<environment>/01.
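A minimal sketch of that pairing for the snd/01 instance; the repository URL, resource names and intervals are assumptions:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: adp-flux-services
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/DEFRA/adp-flux-services  # assumed URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: services-snd-01
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: adp-flux-services
  # path follows the ./services/environments/<environment>/<instance> convention
  path: ./services/environments/snd/01
  prune: true
```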
GitOps is a set of principles for operating and managing a software system. Below are the principles of GitOps.

According to GitOps principles, the desired state of a GitOps-managed system must be:
You can use either Flux or Argo CD as the GitOps operator for AKS. Both are Cloud Native Computing Foundation (CNCF) projects that are widely used. Refer to the Microsoft documentation GitOps for Azure Kubernetes Service.
+Flux V2 is the GitOps operator that will be implemented for the ADP Platform.
+Draft - Further discussion required on step 4
The CI pipeline uses either ADO YAML Pipelines or Jenkins for build. For deployment, it uses Flux as the GitOps operator to pull and sync the app. The data flows through the scenario as follows:
+++Updating the versions of the application in the Helm chart can be done automatically by the CI or manually by the developer before raising the PR.
+
Refer to the Wiki page Application Deployments
Flux V2 will be used to bootstrap the baseline configuration of each cluster. The baseline configuration will comprise the core services, e.g.:
The section below describes how the repositories have been set up to handle multiple environments and teams with Flux V2.
+There are different ways of structuring repositories as described in the Flux documentation.
+Two repositories were created to ensure the separation of core infrastructure and application deployments and manifests.
Repository | Purpose
---|---
adp-flux-core | This will be the main repository used for configuring GitOps. It will contain the manifests to deploy the core services to the AKS Cluster, e.g. for secrets management
adp-flux-services | This is a secondary repository that will be referenced by GitRepositories from the main repository adp-flux-core
The adp-flux-core repository contains the following top-level directories:
- Kustomizations are used to deploy the core services, such as Nginx Plus. The base folder for each environment will contain a list of the core services, defined in the core folder, that should be deployed for a specific environment. Folders 01/02 represent cluster instances within an environment. These overlay folders (01/02) contain environment-specific Kustomize patches with the environment-specific settings.
- HelmRepository and HelmRelease CRD manifests are used for installing the core services.
- A GitRepository and Kustomization per environment points to a path in the adp-flux-services repository. For example, the Kustomization for snd/01 will point to the path services/environments/snd/01 in the adp-flux-services repository.

You can use the markdown generator tool to update the above folder structure.
The adp-flux-services repository contains the following top-level directories:
+Below is a description of the subfolders inside the services folder.
Subfolder | Purpose
---|---
base | The base folder contains manifests that are common to each tenant, e.g. Namespace, ResourceQuota and ServiceAccount. These manifests are generic, in that they have variables that can be specified at the time of onboarding.
environments | This contains the environment subfolders, each containing base and overlays folders. Overlays are used to minimise duplication. Kustomizations are used to deploy the business services.
tenants | These are the application teams.
You can use the markdown generator tool to update the above folder structure.
The Platform core pipeline will be able to deploy all components (resources) that are within the defined environment. The pipeline will be able to deploy resources based on the resource category and/or resource type. This article describes the structure for the deployment of the resources based on ADO Pipelines and associated YAML files.
Prior to this enhancement, when deploying or testing a resource, developers would comment out parts of the code within the deploy-platform-core.yaml file in order to deploy only the relevant templates. The other approach was to deploy all of the resources, which takes more time and slows down the process. This latter approach may also come at an extra cost.
With the enhancement, developers can deploy resources based on the below categories:

- All
- Network - All
- Network - VNET
- Network - NSGs
- Network - Route Tables
- Monitoring - All
- Monitoring - Workspace
- Monitoring - Insights
- Monitoring - Azure Monitor Workspace
- Monitoring - Grafana
- Managed Cluster - All
- Managed Cluster - Private DNS Zone
- Managed Cluster - Cluster
- Front Door - All
- Front Door - Core
- Front Door - WAF
- Application - All
- Application - App Configuration
- Application - Apps Container Registry
- Application - PostgreSql Server
- Application - Redis Cache
- Application - Storage Account
- Application - Service Bus
- Application - Key Vault
Deployment process

On the ADO pipeline, the above categories of resources can be selected in a dropdown menu. All resources can be deployed when the option 'All' is selected. This option will deploy all components of Network, Monitoring, Managed Cluster, Front Door and Application. However, if a developer or tester needs to deploy only network resources, the selection will be 'Network - All'. With this option, all the network resources (VNET, NSGs and Route Tables) will be deployed.
Similarly, if the deployment is for testing a specific resource within the network category, for example VNET, they will be able to do so by selecting the 'Network - VNET' option from the dropdown list. This will only deploy the VNET template as defined in the YAML file.
Implementation approach

The categorisation of resources has been applied with the use of 'parameters' defined within the deploy-platform-core.yaml file. The default value is 'All'.
Furthermore, 'variables' have been used to define each of the values listed in the 'parameters' section.

Conditional statements are subsequently applied to filter the resources in the 'groupedTemplates'.
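As a hedged sketch of how these parts could fit together (the parameter name, variable name and template path below are illustrative, not the actual file contents):

```yaml
parameters:
  - name: deployResources
    displayName: Resources to deploy
    type: string
    default: All
    values:
      - All
      - Network - All
      - Network - VNET
      # ... remaining categories from the list above

variables:
  # true when the selection includes the VNET template
  deployVnet: ${{ or(eq(parameters.deployResources, 'All'), eq(parameters.deployResources, 'Network - All'), eq(parameters.deployResources, 'Network - VNET')) }}

steps:
  # conditional insertion filters which templates end up in the grouped deployment
  - ${{ if eq(variables.deployVnet, 'True') }}:
      - template: templates/network-vnet.yaml  # hypothetical template path
```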
All pipelines on the Platform will have a specific set of naming conventions to ensure consistency, standardisation and readability. This article describes the naming conventions of ADO Pipelines and associated YAML files.
platform-<project>-<purpose>

For the ADO Pipelines: 'Platform' highlights the fact that it is a Platform pipeline. 'Project' is always ADP for the Platform. 'Purpose' is typically either: core, bootstrap, product or service. For the YAML files, we try to maintain the same naming convention, except all files should be prefixed with 'deploy' or 'ci'.
+Based on the types of pipelines and their purposes, the following naming conventions have been identified for ADO (UI Display Name):
+Core Infrastructure
+Products
+Services
The ADO Pipelines will all be created in the existing DEFRA-FFC ADO project under the ADP Subfolder.
Folder | Description | Pipelines | YAML Pipelines
---|---|---|---
Bootstrap | Contains pipelines used for bootstrapping, e.g. setting up service connections | platform-adp-bootstrap-serviceconnections | deploy-platform-bootstrap-serviceconnections
Core | Contains pipelines used to install the ADP Platform Core resources, e.g. AKS. These are all shared Platform components used by platform tenants. | platform-adp-core | deploy-platform-core
Product | Contains pipelines used for onboarding & deploying services onto the ADP Platform (i.e. their infrastructure components) | platform-adp-products (not implemented yet) | deploy-platform-products
Services | Contains Service pipelines for deploying into the AKS Cluster with FluxCD (GitOps pipelines) | platform-adp-services (not implemented yet) | deploy-platform-services
Core infrastructure will reside within GitHub in ADP Infrastructure. Other infrastructure and FluxCD repos are planned, and their design is currently in progress. It is proposed that the infrastructure that is dedicated to our tenant services (products) will reside within ADO-Infrastructure-Services*.
+All infrastructure will be within the 'infra' folder. 'Core' designates the Platform Core Shared Infrastructure that is used by all the platform projects & services (tenants). 'Services' designates that the folder contains only infrastructure and configuration dedicated to that project/service (tenant).
Each Module instantiation file will be within its own folder, broken down with the following convention (as per the Bicep Module registry convention):

<module-name>/<module-name>.<extension>
TODO
+This page is a work in progress and will be updated in due course.
This document details the findings of the Istio Service Mesh investigation, including some of its features and its integration with Flux.
+An Istio service mesh is logically split into a data plane and a control plane.
+The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices. They also collect and report telemetry on all mesh traffic.
+The control plane manages and configures the proxies to route traffic.
+The following diagram shows the different components that make up each plane:
+ +Reference: Istio Architecture
Because we are using Nginx as our ingress controller, the following document was referenced when implementing the installation in adp-flux-core and adp-flux-services: NGINX Ingress Controller and Istio Service Mesh.
Istio automatically configures workload sidecars to use mutual TLS when calling other workloads. By default, Istio configures the destination workloads using PERMISSIVE mode. When PERMISSIVE mode is enabled, a service can accept both plaintext and mutual TLS traffic. In order to only allow mutual TLS traffic, the configuration needs to be changed to STRICT mode.
+Reference: MTLS
Here is an example of applying STRICT mTLS at the namespace level:
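A minimal sketch, assuming a hypothetical tenant namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: ffc-demo   # hypothetical tenant namespace
spec:
  # reject any plaintext traffic to workloads in this namespace
  mtls:
    mode: STRICT
```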
+Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.
+Reference: Circuit breaking
Istio uses DestinationRule to configure circuit breakers.

Here is a sample DestinationRule with circuit breaker rules:
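A sketch based on the Istio documentation; the host name and thresholds are illustrative:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ffc-demo-backend   # hypothetical service name
spec:
  host: ffc-demo-backend
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10  # cap queued HTTP requests
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 5        # eject a host after 5 consecutive 5xx responses
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
```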
While Envoy sidecar/proxy provides a host of failure recovery mechanisms to services running on Istio, it is still imperative to test the end-to-end failure recovery capability of the application as a whole. Misconfigured failure recovery policies (e.g., incompatible/restrictive timeouts across service calls) could result in continued unavailability of critical services in the application, resulting in poor user experience.
+Istio enables protocol-specific fault injection into the network, instead of killing pods, delaying or corrupting packets at TCP layer. Our rationale is that the failures observed by the application layer are the same regardless of network level failures, and that more meaningful failures can be injected at the application layer (e.g., HTTP error codes) to exercise the resilience of an application.
+Operators can configure faults to be injected into requests that match specific criteria. Operators can further restrict the percentage of requests that should be subjected to faults. Two types of faults can be injected: delays and aborts. Delays are timing failures, mimicking increased network latency, or an overloaded upstream service. Aborts are crash failures that mimic failures in upstream services. Aborts usually manifest in the form of HTTP error codes, or TCP connection failures.
+Reference: Fault Injection, https://imesh.ai/blog/traffic-management-and-network-resiliency-with-istio-service-mesh/
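To illustrate the two fault types described above, a hedged VirtualService sketch (the service name and percentages are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ffc-demo-backend   # hypothetical service name
spec:
  hosts:
    - ffc-demo-backend
  http:
    - fault:
        delay:
          percentage:
            value: 10.0    # delay 10% of requests
          fixedDelay: 5s   # mimic increased latency
        abort:
          percentage:
            value: 5.0     # abort 5% of requests
          httpStatus: 500  # mimic an upstream crash failure
      route:
        - destination:
            host: ffc-demo-backend
```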
+Reference: https://kiali.io/ , https://istio.io/latest/docs/ops/integrations/kiali/
+Reference: https://www.jaegertracing.io/ , https://istio.io/latest/docs/tasks/observability/distributed-tracing/jaeger/
+It is possible to have a multi-cluster setup for Istio. https://istio.io/latest/docs/setup/install/multicluster/ .
Our setup is slightly different from the instructions because we are using the Nginx Ingress Controller, so we will have to investigate how to get a multi-cluster setup working with Nginx and Istio.
+This document details the example microservice architecture that is being developed on the Platform. It will detail the logic and physical separation between Clusters, Node Pools, Nodes, Namespaces and Pods and how they map to projects and programmes.
The below diagram generally illustrates these requirements and separation. The namespaces provide the logical separation, and the separate Clusters provide physical separation. An example service is illustrated below, with the types of resources that can be deployed, how they integrate with the Hub/Spoke networking, and egress through the outbound firewall (CCoE-managed Palo Altos).
Azure Monitor for containers now includes recommended alerts. These preconfigured metric alerts enable monitoring of system resources when they are running at peak capacity or hitting failure rates.
Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters.
> Container insights provides preconfigured alert rules, so we will use those as a starting point...
>
> Container insights in Azure Monitor now supports alerts based on Prometheus metrics, and metric rules will be retired on March 14, 2026. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts. As of August 15, 2023, you will no longer be able to configure new custom metric recommended alerts using the portal.
Metric alert rules in Container insights (preview)
+Prometheus alert rules use metric data from your Kubernetes cluster sent to Azure Monitor managed service for Prometheus.
> Enable Prometheus alert rules by deploying the community and recommended alerts using the Bicep template. Follow the README.md file in the same folder for how to deploy: https://github.com/Azure/prometheus-collector/blob/main/AddonBicepTemplate/AzureMonitorAlertsProfile.bicep
The tutorial below specifies how you can configure the alertable metrics in ConfigMaps.
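As a hedged illustration of that ConfigMap approach (the thresholds are examples; the linked tutorial is authoritative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  # thresholds used by the alertable metrics (the percentages below are illustrative)
  alertable-metrics-configuration-settings: |-
    [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
        container_cpu_threshold_percentage = 95.0
        container_memory_rss_threshold_percentage = 95.0
        container_memory_working_set_threshold_percentage = 95.0
```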
+Flux Alerts are configured to notify teams about the status of their GitOps pipelines.
The Flux controllers emit Kubernetes events whenever a resource status changes. You can use the notification-controller to forward these events to Slack, Microsoft Teams, Discord and others. The notification controller is part of the default Flux installation.
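For example, a hedged Provider sketch for Microsoft Teams (the secret name is assumed), which would be paired with an Alert such as the Event Hub example shown earlier:

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta2
kind: Provider
metadata:
  name: msteams
  namespace: flux-system
spec:
  type: msteams
  secretRef:
    name: msteams-webhook-url   # hypothetical secret holding the Teams webhook address
```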
Alerts will be configured for the following scenarios:
This section details how the AKS Prometheus metrics were enabled via automation. The following documents were referenced:
+ +https://github.com/slavizh/BicepTemplates/blob/main/monitor-prometheus/aks-resources.bicep
+These are the steps that were carried out:
+The 'Monitoring Data Reader' role was given to the Grafana system assigned identity on the Azure Monitor Workspace, so Grafana can query metrics. Bicep Template
+A Data Collection Rule Association was created between the AKS Cluster and the Azure Monitor Workspace. Bicep Template
+The default metrics prometheusRuleGroups provided by Microsoft were added to the automation in order to populate the Dashboards in Grafana. Bicep Template
+The azureMonitorProfile metrics were enabled in the AKS Bicep Module Bicep Template
Prometheus Log Retention

Managed Prometheus includes 18 months of data retention. This is included as part of the service and there is no additional charge for storage and retention.
https://azure.microsoft.com/en-gb/updates/general-availability-azure-monitor-managed-service-for-prometheus/

https://techcommunity.microsoft.com/t5/azure-observability-blog/introducing-azure-monitor-managed-service-for-prometheus/ba-p/3600185
Managed Prometheus dashboard example:
+This section details how the Flux Dashboard creation and population was automated. The following document was referenced:
+https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/monitor-gitops-flux-2
+These are the steps that were carried out:
The 'Grafana Admin' permission was granted to the ADO SSV3 (ADO-DefraGovUK-AAD-ADP-SSV3) service principal on the Azure Managed Grafana instance. This is required to allow the pipeline to create the dashboards in Grafana.
+A PowerShell script was created to check if the 'Flux' folder and the new dashboards exist. If they don't exist the script will create them. PowerShell Script
The dashboard JSON templates were taken from: GitOps Flux - Application Deployments Dashboard, Flux Control Plane, and Flux Cluster Stats.
+The 'Reader' permission was granted to the Grafana system assigned identity on the environment subscription. e.g. AZD-ADP-SND1
The Azure Monitor Agent was configured to scrape the Azure Managed Flux metrics by creating a ConfigMap. This change was made in the adp-flux-core repository.
Flux dashboard example:
Azure Network Watcher provides a suite of tools to monitor, diagnose, view metrics, and enable or disable logs for the ADP Platform, specifically the AKS Clusters.
> Network Watcher is enabled automatically in a virtual network's region when we create or update the virtual network in a subscription.
The Network Watcher resource in each ADP Platform subscription and region is created in the NetworkWatcherRG resource group.
Below is a screenshot for Network Watcher instances in the Sandpit environments.
Flow Logs are enabled for the NSGs associated with the AKS Cluster subnets. Flow Logs are vital to monitor, manage, and understand the ADP Platform virtual networks (one per environment) so that they can be protected and optimised. They enable monitoring of the following:

- The current state of the network
- Who is connecting, and where users are connecting from
- Which ports are open to the internet
- What network behaviour is expected, what network behaviour is irregular, and when sudden rises in traffic happen
> The Network Security Group (NSG) Flow Logs retention period is set to 30 days.
>
> Retention is available only if you use general-purpose v2 storage accounts. An ADP Platform storage account has been created in each ADP subscription for the flow logs.
Network Watcher VNet flow logs capability overcomes some of the existing limitations of NSG flow logs. e.g. VNet flow logs avoid the need to enable multi-level flow logging such as in cases of NSG flow logs where network security groups are configured at both subnet & NIC.
+++ + + + + + + + + + + + + + + + +VNet flow logs is currently in PREVIEW. So will not be implemented until it is GA. The preview version is not the available in UK South and UK West regions.
+
Azure Monitor will be used to monitor the health and performance of the Kubernetes clusters and the workloads running on them.
The AKS Cluster generates metrics (Platform and Prometheus metrics) and logs (Activity and Resource logs); refer to Monitoring AKS data reference for detailed information. Custom metrics will be enabled automatically since the AKS cluster uses managed identity authentication.
Source: https://learn.microsoft.com/en-us/azure/aks/monitor-aks
+The diagram below shows the different levels of monitoring.
+ +Source: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/monitor-kubernetes
Azure Monitor now offers a unified cloud-native offering for Kubernetes monitoring:

- Azure Monitor Managed Service for Prometheus
- Azure Monitor Container Insights
- Azure Managed Grafana
Container Insights stores its data in a Log Analytics workspace. Therefore, an ADP Platform Log Analytics workspace has been created to store the AKS metrics and logs.
Enable Container insights for Azure Kubernetes Service (AKS) cluster
+After enabling Container Insights, you will be able to view the AKS Cluster in the list of monitored clusters.
+ +There are many reports available for Node Monitoring, Resource Monitoring, Billing, Networking and Security.
The diagram below shows the insights on the Nodes. The other tabs show insights for the Cluster, Controllers or Containers.
There are many benefits to using the managed services, such as automatic authentication and authorisation setup based on Azure AD identities and pre-built roles (Grafana Admin, Grafana Editor and Grafana Viewer). The managed Grafana service also comes with the capability to integrate with various Azure data sources through an Azure managed identity and RBAC permissions on your subscriptions. It also comes with default Grafana dashboards as a base.
The managed Grafana service has been installed as a shared resource in the SSV3 and SSV5 subscriptions, which are in the O365_DefraDEV and DEFRA tenants respectively. SSV3 is used for the sandpit environments, whilst SSV5 will be used for all environments in the DEFRA tenant: DEV, DEMO, PRE and PROD.
+ +Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts.
+ +Azure Monitor managed service for Prometheus overview diagram
This service requires configuring the metrics addon for the Azure Monitor agent, which sends data to Prometheus. Azure Monitor managed service for Prometheus reached GA on 23 May 2023: General Availability: Azure Monitor managed service for Prometheus.
Azure Architecture (green), per ADP environment:
+Inserting developer provided service secrets & configuration (black):
+Inserting common/ platform provided secrets & configuration for services to use (purple):
+Example appConfig.yaml
+Example appConfig.dev.yaml
+ +Example appConfig.test.yaml
ADO Variable Groups will be created per environment per business service (namespace) and will be created automatically as part of the business service's setup.
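For illustration, a service pipeline would consume one of these groups as below (the group name is hypothetical):

```yaml
variables:
  # {program}-{project}-{env} variable group, e.g. for a demo service in SND
  - group: ffc-demo-snd   # hypothetical group name
```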
This will be used for each service's secrets, following the format {program}-{project}-{env}. For example:

Secrets within these variable groups will need to follow the naming convention {program}-{project}-{service}-{var name}. For example:
Who is this for:
+TODO
+This page is a work in progress and will be updated in due course. Needs environments updated.
The table below details the environments the Platform supports, their purposes, and whether they're mandatory for going live / on the RTL path.
Principal Environment Name | Use case | Route-to-live | Azure Environment Code/Subscription | Additional Information | Azure Tenant
---|---|---|---|---|---
Tenant-Production | Live Services, Public & Private beta. | Yes | AZR-ADP-PRD1 | | Defra
Tenant-Pre-Production | Automated acceptance testing prior to production. | Yes | AZR-ADP-PRE1 | VPN required. | Defra
Tenant-Demo | External demonstrations, PEN tests etc. | No | AZR-ADP-TST2 | VPN required. | Defra
Tenant-Test | General testing, performance testing, exploratory testing | No | AZR-ADP-TST1 | Intended for demos to external and internal stakeholders | Defra
Tenant-Development | Development | Yes | AZR-ADP-DEV1 | VPN required. | Defra
Tenant-Sandpit | Pre-merge automated tests, Pull Request checks etc. | No | AZR-ADP-SND4 | VPN required. | Defra
Infrastructure-Dev | Testing infrastructure changes, proof of concepts, experimentation. Platform Team only. | No/N/A | AZD-ADP-SND1, SND2, SND3 | VPN required. | O365_DefraDev
Principal Environment Name | Use case | Route-to-live | Azure Environment Code/Subscription | Additional Information | Azure Tenant
---|---|---|---|---|---
Shared Services 3 | Management and shared services - Test and all environments below. POC/development area. | No/N/A | AZD-ADP-SSV3 | DefraDev shared services/management | O365_DefraDev
Shared Services 5 | Management and shared services - Production and all environments below. Live services. | No/N/A | AZR-ADP-SSV5 | Contains live ACR. Live shared services/management. | Defra
Documentation on the subscriptions that map to the environments can be found here.
+For all Service teams, the defined minimum 'route to live' path is:
This means that Service Teams must have passed automated checks/smoke tests in the Pre-prod environment, and any initial merge and validation checks in Development, before going live. All other environments are there on-demand for teams. We may add additional environments in the future if needed, such as a dedicated ITHC/PEN test area (or Demo can be used), but again these would be optional.
Note

When deploying from a previous image/artefact already deployed to an environment, no CI/build is required. Environments are selectable.
A key feature for tenant teams is the integration with Microsoft Dynamics 365 and Power Platform. When integrating with these components, Service Principals are used to generate Bearer tokens to authenticate to the instance. This document describes the best practices, networking routes, and pros and cons of different solutions.
+The Platform broadly aligns with the Defra Azure tenancies that the Microsoft SaaS products are hosted in. Namely, Defra and DefraDev. This reduces/removes the cross-tenant identity requirements and enables the use of Azure-managed identities with Microsoft Platform-managed secrets. A key requirement was to align identity solutions and ensure optimal automation.
+To connect to Azure services, the best practice approach is to use Azure Managed Identities. This can be used to connect to almost all Azure PaaS Services, as well as SaaS products such as Dynamics 365. This uses the industry standard OAuth 2 protocols - https://auth0.com/docs/get-started/authentication-and-authorization-flow/client-credentials-flow
+Why do we recommend this?
+Automation and security
+ADP Processes
+Example reference articles: https://blog.yannickreekmans.be/secretless-applications-use-azure-identity-sdk-to-access-data-with-a-managed-identity/ and https://www.eugenevanstaden.com/blog/d365-managed-identity/
+Using an MI to connect to Dataverse https://community.dynamics.com/blogs/post/?postid=09f639ba-5134-4bd1-8812-04e019b7b920
A deeper understanding of App Regs, OAuth and connectivity can be found in these reference articles: https://learn.microsoft.com/en-us/power-apps/developer/data-platform/authenticate-oauth#app-registration
+https://www.vodovnik.com/2023/01/12/interacting-with-dataverse-data-from-azure-c/
When integrating with SaaS products, there are networking considerations in terms of network security and ingress/egress charges. When working with Azure PaaS and SaaS services, a number of options may be available to you depending on the pattern.
Virtual network integration **DRAFT**
+Microsoft are introducing a number of enhancements to secure your applications running in Azure with SaaS products. One example is https://learn.microsoft.com/en-us/data-integration/vnet/data-gateway-power-platform-dataflows
These tools allow you to connect Azure services to products like Power Platform and Dynamics securely, all within your own VNet without any public internet exposure, at lower cost.
ADP is building a future pattern around these scenarios, which will be fully detailed here shortly. As the ADP Cluster is within an Azure VNET, VNET integration is required for secure/non-public connectivity. Alternatively, whitelisting via the ADP Front Door can be used to secure integrations. A pattern is being developed for this approach.
+TBC -
The integration patterns that can be utilized in ADP will be defined here. These may be internal and external patterns, including with Azure services, SaaS products, and third-party services.
+A number of services are in scope and/or have defined patterns, including:
This page contains an overview of the roles and permissions within ADP. It outlines the different roles, such as Platform User, Technical Team Member, Delivery Team Admin, Delivery Programme Admin, and ADP Admin, along with their respective descriptions and responsibilities. It explains the permissions associated with each role in the ADP Portal, Azure DevOps, and GitHub, and describes how permissions are stored in a database and in Azure AD using AAD groups. Users are assigned to specific groups based on their roles, granting them the necessary permissions in the ADP Portal, GitHub, Azure, and Azure DevOps.
+The table below details the roles in the Platform, their scope and description:
Role | Scope | Description
---|---|---
Platform User | Platform | A user of the ADP Platform, who has access to the ADP Portal and can be a member of a Delivery Project or Programme. To do this, they must have a Cloud or DefraGovUK account.
Technical Team Member | Delivery Project | Tech Lead, Tester, Developer, or Architect on the Delivery Project team.
Delivery Team Member | Delivery Project | Member of the Delivery Project team.
Delivery Team Admin | Delivery Project | Tech Lead and/or Delivery Manager for the Delivery Project team.
Delivery Programme Admin | Delivery Programme | Administers Delivery Programmes within the ADP Portal.
ADP Admin | Platform | ADP Platform Engineering delivery team member.
CCoE Engineer | Organization | Cloud Center of Excellence engineer.
ADP Service Account | Platform | Service account used by automation within ADP.
Info
+Please note: if a user holds multiple roles, they will receive the combined permissions associated with all their roles. This ensures that they have access to all the rights and privileges granted by the most significant role they possess. Essentially, the role with the highest level of permissions takes precedence.
+The permissions for the portal are stored both in a database and in Azure AD with the use of AAD groups. The group assignments and naming convention are as follows:
- Delivery Team Members: the AAG-Users-ADP-{programme}-{delivery project}_NonTechUser AAD group.
- Technical Team Members: the AAG-Users-ADP-{programme}-{delivery project}_TechUser AAD group.
- Delivery Team Admins: the AAG-Users-ADP-{programme}-{delivery project}_Admin AAD group.
- Delivery Programme Admins: the AAG-Users-ADP-{programme}_Admin AAD group.
- ADP Admins: the AAG-User-ADP-PlatformEngineers AAD group.

By being added to these groups in Azure AD via the ADP Portal, users will be granted the permissions for their role in the ADP Portal.
+The permissions for each role in the ADP Portal are detailed below.
+ADP Portal Permissions for the Platform User role:
+ADP Portal Permissions for the Delivery Project Team Member role:
+ADP Portal Permissions for the Technical Team Member role:
+ADP Portal Permissions for the Delivery Team Admin role:
+ADP Portal Permissions for the Delivery Programme Admin role:
ADP Portal Permissions for the ADP Admin role:

- Full access to the ADP Portal; admin for all ALBs, delivery projects, programmes, etc.
GitHub permissions are assigned and managed using GitHub teams. The following GitHub teams are automatically assigned to each repository owned by a Delivery Project:

- ADP-{programme}-{delivery project}-Contributors GitHub team: contains all Delivery Project Technical Team Members
- ADP-{programme}-{delivery project}-Admins GitHub team: contains users that have been assigned both the Technical Team Member & Delivery Team Admin roles for the Delivery Project
- ADP-Platform-Admins GitHub team: contains the ADP Admins
+Please Note: Users that have not been asssigned the Technical Team Member role for a Delivery Project will not be given any GitHub permissions. Delivery Programme Admins & Delivery Project Team Admins can use the ADP Portal to add and remove users from their GitHub teams in via the add/ remove user functionality.
+Info
+Please Note: By default all repositories are public and can be accessed by anyone. Anyone can clone, fork, and view the repository. However, only members of the GitHub team will be able to push changes to the repository.
+Technical Team Members are given the following permissions in GitHub:
+Users that have been given both the Technical Team Member & Team Admin role within a Delivery Project are given the following permissions in GitHub:
+ADP Admins are given the following permissions in GitHub:
+TODO
+TBC
+For Azure permissions we use AAD group to given users the correct level of permissions. There are the key groups are for Azure permissions are as follows:
+AAG-Users-ADP-{programme}-{delivery project}_TechUser
AAD group.AAG-User-ADP-PlatformEngineers
Users with only the Delivery Team Admin, Delivery Programme Admin, or Delivery Team Member roles will not be given any Azure permissions. They can add, edit, or remove users from their delivery project's AAD groups via the add/remove user functionality in the ADP Portal.
Technical Team Members are given the following permissions in Azure:

- ...
Should this be done here or in another page?
- Resource group
- Database
+TODO
+TBC
+ADP-ALB-ProgrammeName-DeliveryProjectName-Contributors - For Technical Team Members (write access level to the repo)
ADP will use Technical Team Members' GitHub accounts to assign permissions in SonarCloud. Assuming that a member's GitHub account has been added to DEFRA's SonarCloud organisation, ADP will assign their GitHub account to their Delivery Project's SonarCloud group when they are added to a Delivery Project in the ADP Portal, giving them access to perform the required actions for their Delivery Project within SonarCloud.
+Info
By default, all SonarCloud projects are public and can be accessed by anyone in read-only mode.
The ADP Portal creates a SonarCloud user group and permissions template per Delivery Project on creation, using the {Delivery Project Team name} as the group's name. This group will filter on SonarCloud projects by the Delivery Project's ADP namespace or alias fields. For example, if project FCP ACD has an ADP namespace of fcp-acd and an alias of ffc-acd, the group will have permissions on SonarCloud projects starting with fcp-acd* or ffc-acd* (ffc-acd-frontend, fcp-acd-backend, etc.).
Warning
SonarCloud projects that do not include the Delivery Project's ADP namespace or alias in the project name will not be included in the group permissions. A SonarCloud Organisation Admin will need to add the service to the group permissions manually.
Each Technical Team Member will be added to the SonarCloud user group for the Delivery Project they are a member of. The permissions for the group are as follows for each service in SonarCloud:
ADP Admins will be able to see all services (SonarCloud projects) created by ADP's automation in SonarCloud. These are the permissions for the ADP user group at the SonarCloud project level:

ADP requires these permissions in order to perform API administration tasks in SonarCloud at the organisation level. These permissions are required to create the user groups and permissions templates, and to add users to the permissions templates in SonarCloud. The permissions are as follows:
+Details of SonarCloud permissions.
+Current known Sonar Cloud Web API Actions:
- Adds codeviewer, issueadmin, securityhotspotadmin, scan, and user to the group added to the permissions template.
It is not possible to add new users directly to the organisation. Users will need to be added to the SonarCloud organisation manually by a SonarCloud Organisation Admin, or by allowing member synchronisation on the DEFRA GitHub organisation.
+TODO
+This page is a work in progress and will be updated in due course.
+This article details the AI Services Architecture for the solution at a high level.
+AI Services supported by ADP:
+ +Warning
Please ensure you follow DEFRA's guidelines and policies when using AI services. This includes the use of data and the use of AI services in general, in order to ensure your delivery project is using AI responsibly.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, and Embeddings model series. In addition, the new GPT-4 and GPT-3.5-Turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, image understanding, semantic search, and natural language to code translation.
+The Azure Development Platform (ADP) supports a range of models within Azure OpenAI. However, the availability of these models is limited to those supported by Azure in the UK South region. The following table lists the supported models:
Model | Deployment | Quota | Description
---|---|---|---
gpt-4 | gpt-4 | 80k | An improvement on GPT-3.5, capable of understanding and generating both natural language and code.
gpt-35-turbo | gpt-35-turbo | 350k | An improvement on GPT-3, capable of understanding and generating both natural language and code.
text-embedding-ada-002 | text-embedding-ada-002 | 350k | Converts text into numerical vector form to facilitate text similarity comparisons.
text-embedding-3-large | text-embedding-3-large | 350k | The latest and most capable model for converting text into numerical vector form for text similarity comparisons. Please note that upgrading between different embedding models is not supported.
Warning
+All Delivery Projects should be mindful of the quota restrictions for each model per subscription/region. These models are shared resources among all Delivery Projects. If more quota is needed, ADP can make a request. However, please note that any increase in quota is subject to Microsoft’s approval.
Within the ADP Platform, Azure OpenAI Services are deployed with Azure API Management (APIM) to provide a secure and scalable API endpoint for the AI services. APIM is used to manage the API lifecycle, provide security, and monitor API usage. APIM is deployed in a subnet (/29) and uses a private link to connect to the Azure OpenAI service. Additionally, a developer portal is deployed with APIM, offering self-service API documentation for the AI services. Between the delivery project's service and APIM, a private link will be implemented, and APIM will use the service's managed identity with the Cognitive Services OpenAI User role assigned. This will allow APIM to access the AI services on behalf of the delivery project's service privately and securely.
For any other Azure services that require access to the AI services, they will need to utilize the APIM endpoint and the managed identity.
To meet the timelines and requirements of the delivery projects, our initial step will be to deploy the Azure OpenAI Service. We will provide the AKS cluster and Azure AI Search with direct access to this service over a private endpoint, assigning the Cognitive Services OpenAI User role.
This approach will enable the delivery projects to begin utilizing the AI services and provide valuable feedback. Once the APIM is deployed, we will transition the AI services to APIM. This will establish a secure and scalable API endpoint for the AI services.
+Note
+For local development purposes, the delivery projects can directly use the Azure Open AI services in the SND environment only, provided they are connected to the DEFRA VPN or using a DEFRA laptop. This setup will facilitate testing of the AI services and the provision of feedback.
For local development, developers will be able to access SND only via the DEFRA VPN or a DEFRA laptop, with the Cognitive Services OpenAI User role assigned. This gives developers the ability to test Azure OpenAI services locally via APIM and view model deployments of the deployed Azure OpenAI service.
For Dev plus environments, developers will be able to access SND only via the DEFRA VPN or a DEFRA laptop, with the Cognitive Services OpenAI User role assigned. This will currently allow them to:
Note
+In Dev plus environments, APIM endpoints will not be exposed for local developer access. They will remain accessible only to ADP Azure services or authenticated Delivery project's services.
+APIM is used to enhance the monitoring of OpenAI usage per service. You can utilise an OpenAI Emit Token Metric policy to send metrics to Application Insights about consumption of large language model tokens through Azure OpenAI Service APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens with additional dimensions such as the User ID and API ID. By leveraging these metrics, ADP can monitor usage per service. This information can be utilized by both the project delivery teams and ADP to infer usage, potentially request additional quota if required, and for billing purposes.
+To monitor Azure Open AI directly we enable the Azure OpenAI Service Diagnostic Logs. This allows us to monitor the usage of the AI services directly. The log data is stored in Azure Monitor (Log Analytics Workspace), where both ADP and the delivery projects can gain insights such as:
The Azure OpenAI services are shared between delivery projects and have a quota limit per subscription/region. The quota limit will be monitored by the ADP team and can be increased if required, subject to approval from Microsoft.
+Given the current needs and requirements of the ADP platform, we have opted for the pay-as-you-go pricing model for the Azure Open AI services. This allows the delivery projects to pay only for what they use, eliminating concerns about overcharging. However, this does mean that there is a Tokens-per-Minute limit per model for the Azure Open AI services. Delivery projects need to be aware of this limit when using the AI services.
+To manage this and ensure efficient use of Azure Open AI, APIM will provide policy enforcement to manage the quota limit and offer a better experience to the delivery projects when the quota limit is reached:
+Enable semantic caching of responses to Azure OpenAI API requests to reduce bandwidth and processing requirements imposed on the backend APIs and lower latency perceived by API consumers. With semantic caching, you can return cached responses for identical prompts and also for prompts that are similar in meaning, even if the text isn't the same.
Currently, the range of models that can be deployed is limited to those supported by Azure in the UK South region. In the future, we will look to provide a broader selection of models for delivery projects to utilise. This includes potential support for additional models such as Whisper, DALL-E, and GPT-4o.
+Reserved capacity, also known as Provisioned Throughput Units (PTU), is a feature of Azure OpenAI. The newer offering, PTU-M (Managed), abstracts away the backend compute, pooling resources. Beyond the default TPMs described above, this Azure OpenAI service feature, PTUs, defines the model processing capacity. It uses reserved resources for processing prompts and generating completions.
+PTUs are purchased as a monthly commitment with an auto-renewal option, which will reserve Azure OpenAI capacity within an Azure subscription, using a specific model, in a specific Azure region. TPM and PTU can be used together to provide scaling within a single region.
+Please note, PTU minimums are very expensive. This requires ADP to be at a certain scale to justify the cost across its Delivery Projects.
+Azure AI Search (formerly known as "Azure Cognitive Search") provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications.
+Information retrieval is a foundational aspect of any application that surfaces text and vectors. Common use cases include catalog or document search, data exploration, and increasingly, chat-style applications over proprietary grounding data. When creating a search service, you’ll work with the following capabilities:
+ADP provides a managed Azure AI Search service for delivery projects to use which is scalable and secure using best practice. The core components (indexes, datastores, etc) of the Azure AI Search will be dependently deployable and can be created by delivery projects as required on a self-service basis.
+Warning
Features that require external services will be limited to what has already been requested by Delivery Projects. This normally affects supported data sources and skillsets. If a Delivery Project requires a new data source or skillset, they will need to request it from ADP; it will be reviewed and approved if it is required and does not affect the current delivery projects.
+Supported features of Azure AI Search:
ADP has selected the Standard SKU for the Azure AI Search service as it provides a cost-effective balance of storage and query capacity for the delivery projects. Azure AI Search is a shared service between the ADP Delivery Projects, allowing up to 50 indexes and 50 indexers in total, 35 GB of storage per partition, and 160 GB of storage with two replicas, requiring two search units per environment. This will allow 99.9% availability for read operations.
+Note
+The ADP Team can increase the tier and the number of search units (replicas and partitions) as required. Under the current scope of the delivery projects, the standard SKU with two search units is sufficient, allowing for 99.9% availability for read operations. If a project requires 99.9% availability for read/write operations, additional search units can be added.
Azure AI Search will not be reachable from the public internet and will only be accessible via private link to the DEFRA VPN, DEFRA laptops, or consuming Azure/Delivery Project services via a private endpoint.
Delivery Project services will be given the Search Index Data Contributor role scoped to the indexes that the service requires. This will allow read-write access to the content of those indexes.
Azure AI Search will need access to Azure OpenAI embedding models to allow for semantic search in the search indexes and for use in its skillsets. No direct access to the Azure OpenAI services will be allowed; they will only be accessible via the Azure API Management endpoint. To ensure that Azure AI Search has efficient access, the Cognitive Services OpenAI User role will be assigned to the Azure AI Search service's system-assigned managed identity. This will allow the Azure AI Search service to access the Azure OpenAI services via the APIM endpoint over a private link, securely and efficiently.
For SND environments, developers will be able to access the Azure AI Search service via the Azure Portal with the Reader role assigned across the whole of the AI Search service, as well as the Search Index Data Contributor role scoped to the indexes that are created for their Delivery Project. This will allow read-write access to the content of those indexes, and also to import, refresh, or query the documents collection of an index. It facilitates local development and testing of the Azure AI Search service.
For Dev plus environments, developers will be able to access the Azure AI Search service via the Azure Portal with the Reader role assigned. This role currently enables them to read across the entire service, including search metrics, content metrics (storage consumed, number of objects), and the object definitions of data plane resources (indexes, indexers, and so on). However, they won't have access to read API keys or content within indexes, thereby ensuring the security of the data control plane.
Developers in all environments will be able to access Azure AI Search only via the DEFRA VPN or a DEFRA laptop, with the restrictions detailed above.
+Note
+Azure AI Search doesn't monitor individual user access to content on the search service. If you require this level of monitoring, you need to implement it in your client application.
+The diagnostic logs for Azure AI Search are stored in Azure Monitor (Log Analytics Workspace). This allows ADP and the delivery projects to gain insights into aspects such as latency, errors, and usage of the search service.
The Azure AI Search service will be deployed as a common service for use by all Delivery Projects. For self-service creation and updating of the Azure AI Search components, developers will be able to use ADP PowerShell scripts and a JSON definition file. These components will be created within the X repository, ensuring that the components are consistently created across all projects and environments using Azure Pipelines.
+????
+TODO
+This page is a work in progress and will be updated in due course.
+This article details the Data Services Architecture for the solution at a high level.
+Data Services supported by ADP:
+TODO
+This page is a work in progress and will be updated in due course.
+This article details the Integration Services Architecture for the solution at a high level.
+Integration Services supported by ADP:
Scaling of the Platform
+TODO
+This page is a work in progress and will be updated in due course.
On the Platform, we're acutely aware we'll be hosting a large variety of services with a wide range of requirements. We need to understand and define how we'll manage and scale, based on legitimate requirements and workload demands. Here, we'll cover this in terms of tenant applications, Azure infrastructure, storage capacity, service limits, pipelines, ADO projects, etc.
+The platform. But specifically:
+We'll use a range of tools and features based on the component which we'll detail below. We'll cover both our infrastructure and service scaling configuration, as well as how we scale Platform Teams, Tenants and Services appropriately (i.e. supporting tooling, pipelines, projects, repos, etc.).
+The Tech Radar is a tool to inspire and support teams to pick the best technologies for new projects. It is a visualisation of the technologies that are in use and recommended by the majority of teams. The radar is split into 4 quadrants and 4 rings.
+ +The Tech Radar should have 4 quadrants. Each Entry in the technology stack is represented by a blip in a quadrant. The quadrants represent broad categories that entries fit into.
The Tech Radar should have 4 rings, with the central ring representing entries that are in use and recommended by the majority of teams, whilst the outer ring represents entries that are not recommended and for which we recommend teams transition to a recommended entry.
+The below entries are taken from the technology stack in the FFC-Development-Guide.
+⚠️ Need to confirm whether any of the categories in the above linked technology stack would be appropriate Portal quadrants
Entry | Quadrant | Ring | Note
---|---|---|---
Node.js | Languages & Frameworks | Adopt |
Hapi.js | Languages & Frameworks | Adopt |
NPM | Languages & Frameworks | Adopt |
.NET | Languages & Frameworks | Adopt |
Python | Languages & Frameworks | Assess |
NuGet | Tooling | Adopt |
Docker | Tooling | Adopt |
Docker Compose | Tooling | Adopt |
Helm | Tooling | Adopt |
Bicep | Tooling | Adopt |
Azure CLI | Tooling | Adopt |
PowerShell | Tooling | Adopt |
Azure Boards | Tooling | Adopt |
Jenkins | Tooling | Hold |
Azure Pipelines | Tooling | Adopt |
Jest | Test Tooling | Adopt |
xUnit | Test Tooling | Assess |
NUnit | Test Tooling | Adopt |
Pact Broker | Test Tooling | Adopt |
WebdriverIO | Test Tooling | Adopt |
Cucumber | Test Tooling | Adopt |
Selenium | Test Tooling | Adopt |
BrowserStack | Test Tooling | Adopt |
JMeter | Test Tooling | Adopt |
Snyk | Test Tooling | Adopt |
OWASP ZAP | Test Tooling | Adopt |
AXE? | Test Tooling | Adopt |
WAVE | Test Tooling | Adopt |
Anchore Engine | Test Tooling | Adopt |
Azure Kubernetes Service | Infrastructure | Adopt |
Flux CD | Infrastructure | Adopt |
Azure Service Operator | Infrastructure | Adopt |
Azure Container Registry | Infrastructure | Adopt |
Azure PostgreSQL (Flexible Server) | Infrastructure | Adopt |
Azure Service Bus | Infrastructure | Adopt |
Azure Event Hubs | Infrastructure | Assess |
Azure App Configuration | Infrastructure | Adopt |
Azure Key Vault | Infrastructure | Adopt |
Azure Functions | Infrastructure | Hold | *must be containerized in AKS
Azure Storage | Infrastructure | Adopt |
Entra ID Workload Identity | Infrastructure | Adopt |
Application Insights | Observability | Adopt |
Azure Repos | Tooling | Hold |
GitHub | Tooling | Adopt |
SonarCloud | Tooling | Adopt |
Docker Desktop | Tooling | Adopt |
Google Analytics | Observability | Adopt |
Prometheus | Observability | Adopt |
Grafana | Observability | Adopt |
Azure Monitor | Observability | Adopt |
Visual Studio 2022 | Tooling | Adopt |
Visual Studio Code | Tooling | Adopt |
App Registrations | Tooling | Adopt |
Azure CosmosDB (SQL) | Tooling | Adopt |
Azure CosmosDB (Mongo) | Tooling | Assess |
Azure AI Studio | Infrastructure | Assess |
Azure Machine Learning | Infrastructure | Assess |
Azure Cognitive Services | Infrastructure | Assess |
Azure AI Search | Infrastructure | Assess |
Azure Prompt Flow | Infrastructure | Assess |
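As a purely illustrative sketch of how a row from this table could be represented as structured radar data (the field names below are hypothetical, not the Portal's actual schema):

```yaml
# Hypothetical radar-entry shape; field names are illustrative only.
entries:
  - title: Node.js
    quadrant: Languages & Frameworks
    ring: Adopt
  - title: Azure Functions
    quadrant: Infrastructure
    ring: Hold
    note: must be containerized in AKS
```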
Overall platform strategy for ADP.

TODO

This page is a work in progress and will be updated in due course.

Move fast, with stable infrastructure and standardized delivery

Why? By providing a stable infrastructure environment with common, tried and trusted delivery processes, we'll enable application teams to move faster and realize business value sooner.

Our reach is Defra Azure wide: the platform can be used by any product development team across the Digital delivery programme. We will focus on hosting and running digital transactional business applications.

To achieve our product vision of "Build Apps - Not Infra", the following (initial) goals and objectives will need to be met as part of the Platform delivery.
For ADP there will be two data sources for documentation: one external (ADP Documentation) and one internal (ADP Documentation Internal), both contributed to via a GitHub repository. Our approach follows the GDS Service Standard principle of "Make new source code open", so most of our documentation is open to the public and can easily be viewed by third parties. Making it available for reuse under an open licence, while still keeping ownership of the intellectual property, enables cross-government collaboration and eases support for existing and future projects within Defra. The minority of documentation that is classified as potentially sensitive will be available via ADP Documentation Internal.
Diagram of our approach:

Explanation:

ADP Documentation (External)

Portal Link: https://portal.snd1.adp.defra.gov.uk/docs/default/component/adp-documentation
GitHub Pages: https://defra.github.io/adp-documentation (public)
GitHub Repository: https://github.com/DEFRA/adp-documentation
What will be stored here:

ADP Documentation (Internal)

Portal Link: https://portal.snd1.adp.defra.gov.uk/docs/default/component/adp-documentation-internal
GitHub Pages: https://defra.github.io/adp-documentation-internal/
GitHub Repository: https://github.com/DEFRA/adp-documentation-internal (private)
What will be stored here:
This article outlines the Platform service deployment strategies available. Development teams should read the Platform Versioning and Git strategy document before reading this. ADP's primary deployment strategy is rolling deployments on AKS with Helm and Flux CD. This gives Platform services a zero-downtime deployment strategy, allowing applications to achieve high availability with little or no business impact to live service. This is important for services that need 24/7 availability, and it enables deploying to production multiple times a day. In the future, we will support other deployment strategies, such as Blue-Green and Canary deployments.
ADP uses AKS (Kubernetes) with Helm charts and Flux to perform rolling deployments. The default strategy applied to all services is a rolling deployment, unless otherwise specified in the deployment YAML; we recommend starting with this strategy. It allows applications to be updated incrementally without downtime. There are 3 core parts to a Service deployment/upgrade, which are done in the following order:
The deployment process flow:

A new deployment is triggered via the CI & CD Pipelines for the Service:
New app secrets are imported/updated/deleted* in the Key Vault; they are mastered in the Azure DevOps (ADO) Secret Library Groups for the service.
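As a rough illustration of this step (not ADP's actual pipeline code; the variable group, service connection, and secret names are hypothetical), importing a secret from an ADO library group into Key Vault might look like:

```yaml
# Illustrative pipeline fragment: push a secret mastered in an ADO
# variable group into the environment's Key Vault.
variables:
- group: ffc-demo-secrets                       # hypothetical Secret Library Group
steps:
- task: AzureCLI@2
  displayName: Import secret into Key Vault
  inputs:
    azureSubscription: adp-service-connection   # hypothetical service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az keyvault secret set \
        --vault-name "$(KeyVaultName)" \
        --name "DEMO-API-KEY" \
        --value "$(DemoApiKey)"
```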
New App Configuration keys and values are imported/updated/deleted in the Service Config Maps & App Configuration Service from the Service's 'appConfig.yaml' files. Note: the sentinel key is not updated yet.
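The file's schema isn't given here, so the following is a guess at the general shape rather than the real format; the point is that ordinary keys are written first and the sentinel key is deliberately left until a later step:

```yaml
# Hypothetical appConfig.yaml sketch; key names and structure are
# assumptions, not ADP's actual schema.
- key: DemoService:FeatureX:Enabled
  value: "true"
- key: DemoService:Queue:Name
  value: ffc-demo-queue
# The sentinel key is updated last, in a separate step, so applications
# watching it only reload configuration once all other keys are in place.
- key: DemoService:Sentinel
  value: "20240808.1"
```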
The new images and artefacts are pushed to the environment Container Registry (ACR) via the pipeline deployment, and Flux updates the Service's repository with the new version to be deployed:
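For example (hypothetical names and versions, not ADP's actual repository layout), the version bump committed for Flux might be a change to a HelmRelease along these lines:

```yaml
# Hypothetical HelmRelease: Flux watches this and performs the
# rolling upgrade when the chart version changes.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ffc-demo-web          # hypothetical service name
  namespace: ffc-demo
spec:
  interval: 5m                # how often Flux reconciles this release
  chart:
    spec:
      chart: ffc-demo-web
      version: 1.4.2          # bumping this triggers the rolling upgrade
      sourceRef:
        kind: HelmRepository
        name: acr-charts      # hypothetical chart source backed by ACR
```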
Flux reconciles the cluster with the new web app code and infrastructure versions requested, using a rolling update. Any infrastructure updates take precedence over application updates (Infra > App).

Application deployment:
Infrastructure deployment:

Database deployment:
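To illustrate the rolling update mechanics described above at the Kubernetes level: a Deployment's strategy block controls how pods are replaced incrementally. The values below are illustrative, not ADP's mandated defaults:

```yaml
# Fragment of a Kubernetes Deployment spec showing a zero-downtime
# rolling update: new pods come up before old ones are removed.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one new pod at a time alongside the old
```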
If a user has requested the deployment of App Config/Secrets only, via the flag in the build.yaml, the app and infrastructure will not be deployed on this release:
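The flag's exact name isn't given here, so the snippet below is a hypothetical sketch of the idea rather than the real build.yaml schema:

```yaml
# Hypothetical build.yaml fragment: when set, the pipeline deploys
# App Configuration and secrets only, skipping app and infra.
deployConfigOnly: true   # hypothetical flag name
```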
Note

All releases/deployments are promoted via the Common CI and CD Pipelines, using Azure DevOps as the orchestrator. We promote continuous delivery with automated checks and tests in preference to manual intervention and approvals. Approval gates can optionally be added to Azure Pipelines to gate the promotion of code.
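For example, a deployment job can target an Azure DevOps environment; any approvals and checks configured on that environment then gate the jobs that reference it (stage and environment names below are hypothetical):

```yaml
stages:
- stage: DeployProd
  jobs:
  - deployment: deploy_service
    environment: prod          # approvals/checks on this environment gate the job
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying..."
```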
All services will have the following settings defaulted (changeable if required):

Constraints
This article outlines a two-phase versioning strategy for services on ADP, with the goal of supporting ephemeral environments by Phase 2.
The following Git and Versioning strategies are in place and mandated:
In Phase 1, before ephemeral environments, feature branch builds fetch the version from the main branch's package.json file for Node and the .csproj file for C#. If the feature branch version is the same as main's, a validation error is thrown; if it is higher, the build is tagged with '-alpha' and the pipeline build ID. When the main branch version is pushed to the ACR on deployment after merging into main, it takes precedence over all feature (alpha) candidates of the same major/minor/patch version.
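A rough sketch of that check as a pipeline step (illustrative only; ADP's actual implementation may differ, and the commands assume a Node service with a package.json):

```yaml
steps:
- script: |
    BRANCH_VERSION=$(node -p "require('./package.json').version")
    git fetch origin main --quiet
    MAIN_VERSION=$(git show origin/main:package.json | node -p "JSON.parse(require('fs').readFileSync(0)).version")
    if [ "$BRANCH_VERSION" = "$MAIN_VERSION" ]; then
      echo "##vso[task.logissue type=error]Bump the version above main ($MAIN_VERSION) before building"
      exit 1
    fi
    # Semver pre-release tags (-alpha.*) sort below the final release,
    # so the image later built from main takes precedence.
    echo "##vso[build.updatebuildnumber]${BRANCH_VERSION}-alpha.$(Build.BuildId)"
  displayName: Validate feature branch version and tag as alpha
```

In Phase 2, the same shape would apply to PR builds, with an '-RC' tag in place of '-alpha'.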
In Phase 2, with ephemeral environments, the process remains the same for feature branches. For Pull Request (PR) builds, if the package.json/.csproj version has not been updated, a validation error is thrown; if it has, the image/build is tagged as a release candidate ('-RC') with the build ID. The main branch version takes precedence over all feature (alpha & RC) candidates. With ephemeral environments, each feature deployment deploys a unique pod (application & infrastructure).
Feature branch build and deployments

Pull Request (PR) builds and deployments
No change from Phase 1, including tagging and naming. The developer's (feature branch) version must always be above main.
Main branch build and deployments

Feature branch builds and deployments

Pull Request (PR) builds and deployments

Main branch build and deployments