_File: src/assets/Markdown Files/USAGE.md_
To prepare you, there are some activities that we recommend you do before you…
See [Maturity level 0](./usage/maturity-level-0) to learn about the important first steps.
## Dimensions

The DSOMM framework categorizes its activities into dimensions, each representing a key area of the software development lifecycle where security can be integrated and matured.

Dimensions overview:

- **Build and Deployment**: Focuses on security practices in the CI/CD pipeline and deployment processes.
- **Culture and Organization**: Addresses organizational culture, education, and processes that support security initiatives.
- **Implementation**: Covers secure coding and infrastructure hardening practices.
- **Information Gathering**: Involves gathering data for threat analysis, risk assessment, and metrics collection.
- **Test and Verification**: Focuses on testing practices to validate security measures and ensure continuous improvement.

For detailed information on each dimension, refer to [Dimensions](./usage/dimensions).

## Evidence

If your CISO requires you to document evidence that an activity is completed, you can edit your `generated.yaml` file as documented in the [README.md](./usage/README), section _Teams and Groups_. It is currently not possible to provide evidence directly in the browser.
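As a loose illustration only — the key names below are assumptions, and the README's _Teams and Groups_ section is the authority on the real schema — an evidence entry in `generated.yaml` might look like:

```yaml
# Hypothetical sketch: key names are illustrative, not the authoritative DSOMM schema.
Build and Deployment:
  Build:
    Defined build process:
      teamsImplemented:
        Default: true
      teamsEvidence:
        Default: "Build is defined in the CI pipeline; reviewed by the security champion"
```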
and the corresponding sub-dimension.
The descriptions are largely based on (mostly copied from)
the [OWASP Project Integration Project Writeup](https://github.com/OWASP/www-project-integration-standards/blob/master/writeups/owasp_in_sdlc/index.md).
# Build and Deployment

Secure configuration standards can be enforced during deployment using the [Open Policy Agent](https://www.openpolicyagent.org/).

_please create a PR_

**Example High Maturity scenario:**

The CI/CD system, when migrating successful QA environments to production, applies the appropriate configuration to all components.
Configuration is tested periodically for drift.

Secrets live in memory only and are persisted in a dedicated secrets storage solution such as HashiCorp Vault.
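Periodic drift testing can be as simple as comparing each deployed configuration against its approved baseline. A minimal sketch — the config shape and keys here are assumptions, and a real setup would pull both sides from the deployment system rather than literals:

```python
import hashlib
import json


def config_fingerprint(config: dict) -> str:
    """Stable hash of a configuration, independent of key order."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def detect_drift(baseline: dict, deployed: dict) -> list:
    """Return the keys whose deployed values differ from the approved baseline."""
    keys = set(baseline) | set(deployed)
    return sorted(k for k in keys if baseline.get(k) != deployed.get(k))


baseline = {"debug": False, "tls_min_version": "1.2"}
deployed = {"debug": True, "tls_min_version": "1.2"}
print(detect_drift(baseline, deployed))  # ['debug']
```

Storing only the baseline fingerprint and re-hashing deployed config on a schedule gives a cheap drift alarm; the key-level diff above is the follow-up when fingerprints disagree.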
# Culture and Organization

…on Heroku with one click, it offers both CTF functionality and a self-service…

Business continuity and Security teams run incident management drills
periodically to refresh incident playbook knowledge.
# Implementation

This dimension covers the topic of "traditional"
hardening of software and infrastructure components.

There is an abundance of libraries and frameworks implementing
secure defaults.
For frontend development, [ReactJS](https://reactjs.org/) seems to be
the latest favourite in the Javascript world.

On the database side, there are [ORM](https://sequelize.org/) libraries
and [Query Builders](https://github.com/kayak/pypika) for most languages.

If you write in Java,
the [ESAPI project](https://www.javadoc.io/doc/org.owasp.esapi/esapi/latest/index.html)
offers several methods to securely implement features,
ranging from cryptography to input escaping and output encoding.

**Example low maturity scenario:**

The API was queryable by anyone, and GraphQL introspection was enabled since
all components were left in debug configuration.

Sensitive API paths were not whitelisted.
The team found that the application was attacked when the server showed very
high CPU load.
The response was to bring the system down; very little information about
the attack was found, apart from the fact that someone
was mining cryptocurrencies on the server.

**Another example low maturity scenario:**

The team attempted to build the requested features using vanilla NodeJS.
Connectivity to backend systems is validated by firing an internal request
to `/healthcheck?remoteHost=<xx.xx.xx>`, which attempts to run a ping
command against the IP specified.
All secrets are hard-coded.
The team uses off-the-shelf GraphQL libraries, but versions
are not checked using [NPM Audit](https://docs.npmjs.com/cli/audit).
Development is performed by pushing to master, which triggers a webhook that
uses FTP to copy the latest master to the development server, which will become production once development is finished.
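The `remoteHost` healthcheck above is a textbook command-injection vector. One reasonable hardening sketch (the endpoint parameter comes from the scenario; the validation approach is an assumption, not the project's prescribed fix) validates the input as a literal IP address and avoids the shell entirely:

```python
import ipaddress
import subprocess


def ping_host(remote_host: str) -> bool:
    """Ping a caller-supplied host without exposing a shell-injection vector."""
    # Reject anything that is not a literal IP address; raises ValueError otherwise.
    addr = ipaddress.ip_address(remote_host)
    # Argument list, no shell=True: the value can never be parsed as shell syntax.
    result = subprocess.run(
        ["ping", "-c", "1", str(addr)],
        capture_output=True,
        timeout=5,
    )
    return result.returncode == 0


# ping_host("127.0.0.1; rm -rf /") raises ValueError instead of executing anything.
```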
**Example High Maturity Scenario:**

Team members have access to comprehensive documentation
and a library of code snippets they can use to accelerate development.

Linters are bundled with pre-commit hooks,
and no code reaches master without peer review.

Pre-merge tests are executed before merging code into master.
Tests run a comprehensive suite covering unit tests,
service acceptance tests,
as well as regression tests.

Once a day, a pipeline of specially configured
static code analysis tools runs against
the features merged that day; the results are
triaged by a trained security team and fed back to engineering.

There is a cronjob executing Dynamic Analysis tools against Staging
with a similar process.

Pentests are conducted against features on every release,
and also periodically against the whole software stack.
# Information Gathering

Concerning metrics, the community has been quite vocal on what to measure
and how important it is.
The OWASP CISO guide offers 3 broad categories of SDLC metrics[1] which can
be used to measure the effectiveness of security practices.
Moreover, there are a number of presentations on what could be leveraged
to improve a security programme, starting from Marcus Ranum's [keynote](https://www.youtube.com/watch?v=yW7kSVwucSk)
at AppSec California[1],
Caroline Wong's similar [presentation](https://www.youtube.com/watch?v=dY8IuQ8rUd4),
and [this presentation](https://www.youtube.com/watch?v=-XI2DL2Uulo) by J. Rose and R. Sulatycki.
These, among several writeups by private companies, all offer their own version of what could be measured.

Projects such as the [ELK stack](https://www.elastic.co/elastic-stack), [Grafana](https://grafana.com/)
and [Prometheus](https://prometheus.io/docs/introduction/overview/) can be used to aggregate
logging and provide observability.

However, no matter the WAFs, logging, and secure configuration enforced
at this stage, incidents will eventually occur.
Incident management is a complicated and high-stress process.
To prepare organisations for this, SAMM includes a section on [incident management](https://owaspsamm.org/model/operations/incident-management/) involving simple questions for stakeholders to answer so you can determine incident preparedness accurately.

**Example High Maturity scenario:**

Logging from all components gets aggregated in dashboards, and alerts
are raised based on several thresholds and events.
There are canary values and events fired against monitoring
from time to time to validate that it works.
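The canary idea above reduces to: fire a known synthetic event at the monitoring pipeline, then alert if it is not observed within a deadline. A minimal sketch — the event names and threshold below are assumptions for illustration:

```python
import time


class CanaryMonitor:
    """Tracks when each expected canary event was last observed."""

    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self.last_seen = {}

    def record(self, event: str, now: float = None) -> None:
        """Note that a canary event arrived (injectable clock for testing)."""
        self.last_seen[event] = time.time() if now is None else now

    def stale_events(self, expected: list, now: float = None) -> list:
        """Canaries never seen, or seen too long ago: monitoring may be broken."""
        now = time.time() if now is None else now
        return [
            e for e in expected
            if e not in self.last_seen or now - self.last_seen[e] > self.max_age
        ]


monitor = CanaryMonitor(max_age_seconds=300)
monitor.record("canary.login", now=1000.0)
print(monitor.stale_events(["canary.login", "canary.payment"], now=1100.0))
# ['canary.payment'] — the seen canary is fresh, the other never arrived
```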
# Test and Verification

At any maturity level, linters can be introduced to ensure that consistent…

…The remediation effort was significant.
The application features received Dynamic Automated testing when each reached staging; a trained QA team validated business requirements that involved security checks.
A security team performed an adequate pentest and gave a sign-off.
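As one way to introduce linters at any maturity level, the pre-commit framework can wire a linter into every commit via a `.pre-commit-config.yaml`; the hook repository and pinned revision below are illustrative, so check the linter's own documentation for current values:

```yaml
# Illustrative pre-commit configuration — pin versions appropriate to your project.
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
```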
0 commit comments