Hi,
we've done a couple of test days of the SCAP tools, mainly openscap and
SCAP Workbench.
Some of the feedback we gathered relates to the SSG content and can't
be fixed in the tools. I think it would be valuable to the project to discuss
these items and perhaps figure out a way to solve some of them.
I would like to thank Matt Reid and Martin Zember for providing most of the
feedback below.
1) Discoverability: It is easy to find some info about RHEL5 USGCB; the
assumption was that RHEL6 would be similar, which is sadly not the case.
usgcb.nist.gov provides content for RHEL5 but doesn't even mention RHEL6.
It is entirely ungooglable
- user expected to find it with: "automate usgcb linux",
"usgcb linux tool", "check linux against usgcb",
"linux security compliance", "usgcb audit linux".
- the only way to find it is to find the openscap page first
and then look at related projects; unfortunately openscap
doesn't mention that scap-security-guide has USGCB...
2) Changing profile IDs: stig-rhel6-server was renamed to
stig-rhel6-server-upstream, but the documentation still mentions
stig-rhel6-server.
Why would an ID like this change?
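(For what it's worth, the current profile IDs can always be listed straight
from the shipped content, which sidesteps stale documentation. The datastream
path below is an assumption; adjust it to wherever your distribution installs
the SSG content.)

```
$ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel6-ds.xml
```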
3) Do most of the checks need to point back to "A conditional clause for
check statements"? Isn't that just a placeholder that doesn't do
anything?
Do we need to have that placeholder dropdown? Can it be customized to be
useful in some way? All the other dropdowns offer reasonable options to
select from, as well as a default option. When it says it only takes
effect when the profile is used for evaluation, does that mean when
the profile is used for scanning? What do we consider evaluating?
(very commonly asked)
4) (about Rule descriptions) This is more of a content issue, but while the
information we show after expanding is useful, I'd rather expect to see why
we're running the test and why it's important that my system pass it.
5) "The success of automatic remediation greatly depends on content quality
and could result in broken machines if not used carefully!" What kind of
content might break stuff? Is there any way to know beforehand that your
content might cause issues? How are machines broken?
(Talking about a disclaimer in SCAP Workbench user manual.)
My reply:
Content can do virtually anything. There is no way to tell. This disclaimer
hints that users should really trust the content before running remediation.
All bets are off, I can write SCAP content that will wipe all data upon
remediation.
Second testing round, same issue:
It also doesn't convey the potential danger that the doc mentions about
this setting, where it could break your system. How reliable is the output?
Are the times when it breaks things edge cases or somewhat common? If it
isn't very reliable, I'm not sure we should be offering the option. I'd
suggest rewording the tooltip to be "Checking this attempts to automatically
remediate failed checks when scanning. Warning: in some situations, this
could result in misconfigured values."
My reply:
Generally I refrain from making statements about the content in workbench.
This really is a question for whoever ships the content. scap-workbench
cannot make sure remediation is reliable, only content authors can.
(about remediation, this is an issue all the SCAP projects share)
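(To make the trust point concrete: remediation is typically triggered with
oscap's --remediate switch, and a cautious alternative is to generate the fix
script and audit it before running anything. The file and profile names below
are illustrative, not necessarily the exact ones SSG ships.)

```
# Apply remediations while evaluating, only with content you trust:
$ oscap xccdf eval --remediate --profile usgcb-rhel6-server ssg-rhel6-ds.xml

# Safer: generate the remediation script first and review it by hand:
$ oscap xccdf generate fix --profile usgcb-rhel6-server \
    --output remediate.sh ssg-rhel6-xccdf.xml
```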
6) Why does a default profile contain many service folders that are enabled
but have no checks enabled inside? Shouldn't the whole group be disabled if
nothing in that section is going to run? It just clutters up the report.
My reply:
This is a content issue again. Workbench just shows what the content does.
Both approaches are valid. Disabling the group is enough but SSG opts to
disable all the rules inside and leave the group enabled.
7) Is the purpose of the Introduction folder with General Principles and How
to Use This Guide simply to have a click target that can educate users? I'm
not sure why those can be enabled/disabled, when they don't seem to affect
the scan results at all.
My reply:
Yup. In the past we did a poor job of displaying front matter, notices and
related elements. That's why content authors started to abuse Group
descriptions for this purpose. Now that we do a good job of displaying the
notices, these practices still remain.
See:
https://github.com/OpenSCAP/scap-security-guide/issues/357
8) It might be good for what I'm going to call the remediation section
(underneath Identifiers and References) to start with a one-sentence summary
explaining why something failed. While you can usually figure it out pretty
easily, the name of the check and its status, without any interpretation,
can be initially confusing. My "Shared Library Files Have Restrictive
Permissions" test failed; does that mean my library files are too
restrictive? Not restrictive enough? A clear action item for how to address
the failure would also be nice, because the remediation section seems to
always say the same thing regardless of a pass or fail.
(Not fixable in a generic way but maybe we can change the texts to help.)
9) Could be reworded to be more concise and clearer - On Password Minimum
Length modal details: "Nowadays recommended values, considered as secure by
various organizations focused on topic of computer security, range from 12
(FISMA) up to 14 (DoD) characters for password length requirements. If
a program consults /etc/login.defs and also another PAM module (such as
pam_cracklib) during a password change operation, then the most restrictive
must be satisfied. See PAM section for more information about enforcing
password quality requirements."
(The description is probably from an official source, not sure if we can
fix that.)
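(To illustrate the "most restrictive wins" interaction the description
alludes to, here is a sketch with made-up values; the file paths and module
arguments are the usual ones on RHEL-like systems, but treat them as an
assumption for your setup.)

```
# /etc/login.defs
PASS_MIN_LEN 12

# /etc/pam.d/system-auth
# pam_cracklib's minlen is the stricter of the two settings, so an
# effective minimum of 14 characters is what gets enforced here.
password requisite pam_cracklib.so minlen=14
```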
10) If there are certain rules that check for specific, subjective numbers,
it would be good if we reported what the current value was within the report,
so they don't have to go track it down and see how far off they are. For
example, the password minimum age says that "A value greater than 1 day is
considered sufficient for many environments". The check failed on my system.
Is that because it isn't set or it's set too low? What if I've determined
that the suggested default isn't sufficient for my environment, and I'm
happy with what I have it set to? Do I even know from this modal that it was
looking for a value of 1? Couldn't there be a mismatch between what the
description says it's looking for and what the parameter checked for?
Without reporting both here, I'm not sure there's any way to know if that
were the case.
11) In Password Warning Age modal details: "A value of 7 days would be nowadays
considered to be a standard." Do we need to qualify with nowadays? Say
"A value of 7 days is considered to be standard.".
12) In SSH Root Login Disabled modal details: "The root user should never be
allowed to login to a system directly over a network." should be
"...allowed to log in to..."
(Again, not sure if we can change this.)
Keep in mind that all of this was gathered as a side effect, without focusing
on the content at all. If we had done content-specific testing we might have
gathered more.
Anybody else doing any UX testing of the SCAP tools?
--
Martin Preisler